May 8 05:19:26.059927 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed May 7 22:54:21 -00 2025
May 8 05:19:26.059957 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 05:19:26.059969 kernel: BIOS-provided physical RAM map:
May 8 05:19:26.059978 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 8 05:19:26.059987 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 8 05:19:26.059998 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 8 05:19:26.060009 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
May 8 05:19:26.060018 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
May 8 05:19:26.060027 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 8 05:19:26.060036 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 8 05:19:26.060045 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
May 8 05:19:26.060053 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 8 05:19:26.060062 kernel: NX (Execute Disable) protection: active
May 8 05:19:26.060071 kernel: APIC: Static calls initialized
May 8 05:19:26.060084 kernel: SMBIOS 3.0.0 present.
May 8 05:19:26.060094 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
May 8 05:19:26.060104 kernel: Hypervisor detected: KVM
May 8 05:19:26.060113 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 8 05:19:26.060122 kernel: kvm-clock: using sched offset of 3526779790 cycles
May 8 05:19:26.060135 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 8 05:19:26.060144 kernel: tsc: Detected 1996.249 MHz processor
May 8 05:19:26.060154 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 8 05:19:26.060164 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 8 05:19:26.060174 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
May 8 05:19:26.060184 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 8 05:19:26.060193 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 8 05:19:26.060203 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
May 8 05:19:26.060213 kernel: ACPI: Early table checksum verification disabled
May 8 05:19:26.060225 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
May 8 05:19:26.060235 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 05:19:26.060244 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 05:19:26.060254 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 05:19:26.060264 kernel: ACPI: FACS 0x00000000BFFE0000 000040
May 8 05:19:26.060273 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 05:19:26.060283 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 05:19:26.060293 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
May 8 05:19:26.060303 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 8 05:19:26.060316 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
May 8 05:19:26.060327 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
May 8 05:19:26.060336 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
May 8 05:19:26.060349 kernel: No NUMA configuration found
May 8 05:19:26.060359 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
May 8 05:19:26.060368 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
May 8 05:19:26.060380 kernel: Zone ranges:
May 8 05:19:26.060389 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 8 05:19:26.060399 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 8 05:19:26.060408 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
May 8 05:19:26.060418 kernel: Movable zone start for each node
May 8 05:19:26.060427 kernel: Early memory node ranges
May 8 05:19:26.060436 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 8 05:19:26.060446 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
May 8 05:19:26.060457 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
May 8 05:19:26.060467 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
May 8 05:19:26.060476 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 8 05:19:26.060485 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 8 05:19:26.060495 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 8 05:19:26.060504 kernel: ACPI: PM-Timer IO Port: 0x608
May 8 05:19:26.060514 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 8 05:19:26.060523 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 8 05:19:26.060533 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 8 05:19:26.060545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 8 05:19:26.060555 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 8 05:19:26.060564 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 8 05:19:26.060573 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 8 05:19:26.060583 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 8 05:19:26.060592 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 8 05:19:26.060602 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 8 05:19:26.060611 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
May 8 05:19:26.060636 kernel: Booting paravirtualized kernel on KVM
May 8 05:19:26.060649 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 8 05:19:26.060659 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 8 05:19:26.060669 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
May 8 05:19:26.060678 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
May 8 05:19:26.060688 kernel: pcpu-alloc: [0] 0 1
May 8 05:19:26.060697 kernel: kvm-guest: PV spinlocks disabled, no host support
May 8 05:19:26.060708 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 05:19:26.060718 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 05:19:26.060730 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 05:19:26.060739 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 05:19:26.060749 kernel: Fallback order for Node 0: 0
May 8 05:19:26.060758 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 8 05:19:26.060768 kernel: Policy zone: Normal
May 8 05:19:26.060777 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 05:19:26.060786 kernel: software IO TLB: area num 2.
May 8 05:19:26.060796 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42856K init, 2336K bss, 227308K reserved, 0K cma-reserved)
May 8 05:19:26.060806 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 8 05:19:26.060817 kernel: ftrace: allocating 37944 entries in 149 pages
May 8 05:19:26.060827 kernel: ftrace: allocated 149 pages with 4 groups
May 8 05:19:26.060836 kernel: Dynamic Preempt: voluntary
May 8 05:19:26.060846 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 05:19:26.060856 kernel: rcu: RCU event tracing is enabled.
May 8 05:19:26.060866 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 8 05:19:26.060876 kernel: Trampoline variant of Tasks RCU enabled.
May 8 05:19:26.060885 kernel: Rude variant of Tasks RCU enabled.
May 8 05:19:26.060895 kernel: Tracing variant of Tasks RCU enabled.
May 8 05:19:26.060907 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 05:19:26.060916 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 8 05:19:26.060925 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 8 05:19:26.060935 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 05:19:26.060944 kernel: Console: colour VGA+ 80x25
May 8 05:19:26.060954 kernel: printk: console [tty0] enabled
May 8 05:19:26.060963 kernel: printk: console [ttyS0] enabled
May 8 05:19:26.060973 kernel: ACPI: Core revision 20230628
May 8 05:19:26.060982 kernel: APIC: Switch to symmetric I/O mode setup
May 8 05:19:26.060993 kernel: x2apic enabled
May 8 05:19:26.061003 kernel: APIC: Switched APIC routing to: physical x2apic
May 8 05:19:26.061013 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 8 05:19:26.061022 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 8 05:19:26.061031 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
May 8 05:19:26.061041 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 8 05:19:26.061050 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 8 05:19:26.061060 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 8 05:19:26.061069 kernel: Spectre V2 : Mitigation: Retpolines
May 8 05:19:26.061081 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
May 8 05:19:26.061090 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
May 8 05:19:26.061100 kernel: Speculative Store Bypass: Vulnerable
May 8 05:19:26.061109 kernel: x86/fpu: x87 FPU will use FXSAVE
May 8 05:19:26.061119 kernel: Freeing SMP alternatives memory: 32K
May 8 05:19:26.061135 kernel: pid_max: default: 32768 minimum: 301
May 8 05:19:26.061148 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 05:19:26.061158 kernel: landlock: Up and running.
May 8 05:19:26.061167 kernel: SELinux: Initializing.
May 8 05:19:26.061177 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 05:19:26.061188 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 05:19:26.061198 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
May 8 05:19:26.061210 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 05:19:26.061220 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 05:19:26.061231 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 05:19:26.061241 kernel: Performance Events: AMD PMU driver.
May 8 05:19:26.061250 kernel: ... version: 0
May 8 05:19:26.061262 kernel: ... bit width: 48
May 8 05:19:26.061272 kernel: ... generic registers: 4
May 8 05:19:26.061282 kernel: ... value mask: 0000ffffffffffff
May 8 05:19:26.061292 kernel: ... max period: 00007fffffffffff
May 8 05:19:26.061302 kernel: ... fixed-purpose events: 0
May 8 05:19:26.061312 kernel: ... event mask: 000000000000000f
May 8 05:19:26.061322 kernel: signal: max sigframe size: 1440
May 8 05:19:26.061332 kernel: rcu: Hierarchical SRCU implementation.
May 8 05:19:26.061342 kernel: rcu: Max phase no-delay instances is 400.
May 8 05:19:26.061354 kernel: smp: Bringing up secondary CPUs ...
May 8 05:19:26.061364 kernel: smpboot: x86: Booting SMP configuration:
May 8 05:19:26.061374 kernel: .... node #0, CPUs: #1
May 8 05:19:26.061384 kernel: smp: Brought up 1 node, 2 CPUs
May 8 05:19:26.061393 kernel: smpboot: Max logical packages: 2
May 8 05:19:26.061403 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
May 8 05:19:26.061413 kernel: devtmpfs: initialized
May 8 05:19:26.061423 kernel: x86/mm: Memory block size: 128MB
May 8 05:19:26.061433 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 05:19:26.061445 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 8 05:19:26.061455 kernel: pinctrl core: initialized pinctrl subsystem
May 8 05:19:26.061465 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 05:19:26.061476 kernel: audit: initializing netlink subsys (disabled)
May 8 05:19:26.061486 kernel: audit: type=2000 audit(1746681565.316:1): state=initialized audit_enabled=0 res=1
May 8 05:19:26.061496 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 05:19:26.061506 kernel: thermal_sys: Registered thermal governor 'user_space'
May 8 05:19:26.061515 kernel: cpuidle: using governor menu
May 8 05:19:26.061528 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 05:19:26.061544 kernel: dca service started, version 1.12.1
May 8 05:19:26.061554 kernel: PCI: Using configuration type 1 for base access
May 8 05:19:26.061563 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 8 05:19:26.061572 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 05:19:26.061581 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 8 05:19:26.061590 kernel: ACPI: Added _OSI(Module Device)
May 8 05:19:26.061599 kernel: ACPI: Added _OSI(Processor Device)
May 8 05:19:26.061608 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 05:19:26.061617 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 05:19:26.063655 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 05:19:26.063668 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 8 05:19:26.063678 kernel: ACPI: Interpreter enabled
May 8 05:19:26.063688 kernel: ACPI: PM: (supports S0 S3 S5)
May 8 05:19:26.063698 kernel: ACPI: Using IOAPIC for interrupt routing
May 8 05:19:26.063708 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 8 05:19:26.063718 kernel: PCI: Using E820 reservations for host bridge windows
May 8 05:19:26.063727 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 8 05:19:26.063737 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 05:19:26.063894 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 8 05:19:26.064144 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 8 05:19:26.064247 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 8 05:19:26.064262 kernel: acpiphp: Slot [3] registered
May 8 05:19:26.064272 kernel: acpiphp: Slot [4] registered
May 8 05:19:26.064281 kernel: acpiphp: Slot [5] registered
May 8 05:19:26.064291 kernel: acpiphp: Slot [6] registered
May 8 05:19:26.064305 kernel: acpiphp: Slot [7] registered
May 8 05:19:26.064314 kernel: acpiphp: Slot [8] registered
May 8 05:19:26.064323 kernel: acpiphp: Slot [9] registered
May 8 05:19:26.064333 kernel: acpiphp: Slot [10] registered
May 8 05:19:26.064342 kernel: acpiphp: Slot [11] registered
May 8 05:19:26.064352 kernel: acpiphp: Slot [12] registered
May 8 05:19:26.064361 kernel: acpiphp: Slot [13] registered
May 8 05:19:26.064371 kernel: acpiphp: Slot [14] registered
May 8 05:19:26.064380 kernel: acpiphp: Slot [15] registered
May 8 05:19:26.064389 kernel: acpiphp: Slot [16] registered
May 8 05:19:26.064401 kernel: acpiphp: Slot [17] registered
May 8 05:19:26.064411 kernel: acpiphp: Slot [18] registered
May 8 05:19:26.064420 kernel: acpiphp: Slot [19] registered
May 8 05:19:26.064429 kernel: acpiphp: Slot [20] registered
May 8 05:19:26.064438 kernel: acpiphp: Slot [21] registered
May 8 05:19:26.064448 kernel: acpiphp: Slot [22] registered
May 8 05:19:26.064457 kernel: acpiphp: Slot [23] registered
May 8 05:19:26.064467 kernel: acpiphp: Slot [24] registered
May 8 05:19:26.064476 kernel: acpiphp: Slot [25] registered
May 8 05:19:26.064487 kernel: acpiphp: Slot [26] registered
May 8 05:19:26.064497 kernel: acpiphp: Slot [27] registered
May 8 05:19:26.064506 kernel: acpiphp: Slot [28] registered
May 8 05:19:26.064516 kernel: acpiphp: Slot [29] registered
May 8 05:19:26.064525 kernel: acpiphp: Slot [30] registered
May 8 05:19:26.064534 kernel: acpiphp: Slot [31] registered
May 8 05:19:26.064544 kernel: PCI host bridge to bus 0000:00
May 8 05:19:26.064665 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 8 05:19:26.064761 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 8 05:19:26.064855 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 8 05:19:26.064943 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 8 05:19:26.065037 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
May 8 05:19:26.065124 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 05:19:26.065236 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 8 05:19:26.065347 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 8 05:19:26.065453 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 8 05:19:26.065546 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
May 8 05:19:26.067693 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 8 05:19:26.067803 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 8 05:19:26.067901 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 8 05:19:26.067999 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 8 05:19:26.068104 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 8 05:19:26.068207 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 8 05:19:26.068304 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 8 05:19:26.068409 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 8 05:19:26.068507 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 8 05:19:26.068604 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
May 8 05:19:26.068724 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
May 8 05:19:26.068827 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
May 8 05:19:26.068924 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 8 05:19:26.069034 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 8 05:19:26.069134 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
May 8 05:19:26.069234 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
May 8 05:19:26.069331 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
May 8 05:19:26.069428 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
May 8 05:19:26.069541 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 8 05:19:26.072701 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 8 05:19:26.072812 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
May 8 05:19:26.072905 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
May 8 05:19:26.073005 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
May 8 05:19:26.073098 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
May 8 05:19:26.073191 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
May 8 05:19:26.073297 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
May 8 05:19:26.073390 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
May 8 05:19:26.073482 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
May 8 05:19:26.073575 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
May 8 05:19:26.073589 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 8 05:19:26.073598 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 8 05:19:26.073608 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 8 05:19:26.073616 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 8 05:19:26.073645 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 8 05:19:26.073654 kernel: iommu: Default domain type: Translated
May 8 05:19:26.073663 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 8 05:19:26.073672 kernel: PCI: Using ACPI for IRQ routing
May 8 05:19:26.073681 kernel: PCI: pci_cache_line_size set to 64 bytes
May 8 05:19:26.073690 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 8 05:19:26.073699 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
May 8 05:19:26.073793 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 8 05:19:26.073887 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 8 05:19:26.073985 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 8 05:19:26.073999 kernel: vgaarb: loaded
May 8 05:19:26.074008 kernel: clocksource: Switched to clocksource kvm-clock
May 8 05:19:26.074031 kernel: VFS: Disk quotas dquot_6.6.0
May 8 05:19:26.074050 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 05:19:26.074089 kernel: pnp: PnP ACPI init
May 8 05:19:26.074228 kernel: pnp 00:03: [dma 2]
May 8 05:19:26.074244 kernel: pnp: PnP ACPI: found 5 devices
May 8 05:19:26.074258 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 8 05:19:26.074267 kernel: NET: Registered PF_INET protocol family
May 8 05:19:26.074276 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 05:19:26.074285 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 05:19:26.074295 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 05:19:26.074304 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 05:19:26.074313 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 05:19:26.074322 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 05:19:26.074331 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 05:19:26.074342 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 05:19:26.074351 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 05:19:26.074360 kernel: NET: Registered PF_XDP protocol family
May 8 05:19:26.074446 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 8 05:19:26.074551 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 8 05:19:26.076668 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 8 05:19:26.076755 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 8 05:19:26.076836 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
May 8 05:19:26.076929 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 8 05:19:26.077029 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 8 05:19:26.077044 kernel: PCI: CLS 0 bytes, default 64
May 8 05:19:26.077054 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 8 05:19:26.077063 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
May 8 05:19:26.077072 kernel: Initialise system trusted keyrings
May 8 05:19:26.077081 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 05:19:26.077090 kernel: Key type asymmetric registered
May 8 05:19:26.077099 kernel: Asymmetric key parser 'x509' registered
May 8 05:19:26.077111 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 8 05:19:26.077120 kernel: io scheduler mq-deadline registered
May 8 05:19:26.077129 kernel: io scheduler kyber registered
May 8 05:19:26.077138 kernel: io scheduler bfq registered
May 8 05:19:26.077147 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 8 05:19:26.077157 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 8 05:19:26.077166 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 8 05:19:26.077175 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 8 05:19:26.077184 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 8 05:19:26.077196 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 05:19:26.077205 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 8 05:19:26.077214 kernel: random: crng init done
May 8 05:19:26.077223 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 8 05:19:26.077232 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 8 05:19:26.077285 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 8 05:19:26.077454 kernel: rtc_cmos 00:04: RTC can wake from S4
May 8 05:19:26.077472 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 8 05:19:26.077565 kernel: rtc_cmos 00:04: registered as rtc0
May 8 05:19:26.077681 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T05:19:25 UTC (1746681565)
May 8 05:19:26.077782 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 8 05:19:26.077796 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 8 05:19:26.077806 kernel: NET: Registered PF_INET6 protocol family
May 8 05:19:26.077816 kernel: Segment Routing with IPv6
May 8 05:19:26.077826 kernel: In-situ OAM (IOAM) with IPv6
May 8 05:19:26.077836 kernel: NET: Registered PF_PACKET protocol family
May 8 05:19:26.077845 kernel: Key type dns_resolver registered
May 8 05:19:26.077860 kernel: IPI shorthand broadcast: enabled
May 8 05:19:26.077869 kernel: sched_clock: Marking stable (1031007470, 174377453)->(1245300050, -39915127)
May 8 05:19:26.077879 kernel: registered taskstats version 1
May 8 05:19:26.077889 kernel: Loading compiled-in X.509 certificates
May 8 05:19:26.077899 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 75e4e434c57439d3f2eaf7797bbbcdd698dafd0e'
May 8 05:19:26.077908 kernel: Key type .fscrypt registered
May 8 05:19:26.077918 kernel: Key type fscrypt-provisioning registered
May 8 05:19:26.077927 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 05:19:26.077939 kernel: ima: Allocated hash algorithm: sha1
May 8 05:19:26.077949 kernel: ima: No architecture policies found
May 8 05:19:26.077959 kernel: clk: Disabling unused clocks
May 8 05:19:26.077968 kernel: Freeing unused kernel image (initmem) memory: 42856K
May 8 05:19:26.077978 kernel: Write protecting the kernel read-only data: 36864k
May 8 05:19:26.077988 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 8 05:19:26.077997 kernel: Run /init as init process
May 8 05:19:26.078007 kernel: with arguments:
May 8 05:19:26.078016 kernel: /init
May 8 05:19:26.078026 kernel: with environment:
May 8 05:19:26.078037 kernel: HOME=/
May 8 05:19:26.078046 kernel: TERM=linux
May 8 05:19:26.078056 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 05:19:26.078068 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 05:19:26.078081 systemd[1]: Detected virtualization kvm.
May 8 05:19:26.078092 systemd[1]: Detected architecture x86-64.
May 8 05:19:26.078102 systemd[1]: Running in initrd.
May 8 05:19:26.078114 systemd[1]: No hostname configured, using default hostname.
May 8 05:19:26.078124 systemd[1]: Hostname set to .
May 8 05:19:26.078135 systemd[1]: Initializing machine ID from VM UUID.
May 8 05:19:26.078145 systemd[1]: Queued start job for default target initrd.target.
May 8 05:19:26.078156 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 05:19:26.078166 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 05:19:26.078178 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 05:19:26.078198 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 05:19:26.078210 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 05:19:26.078221 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 05:19:26.078234 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 05:19:26.078245 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 05:19:26.078258 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 05:19:26.078268 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 05:19:26.078309 systemd[1]: Reached target paths.target - Path Units.
May 8 05:19:26.078321 systemd[1]: Reached target slices.target - Slice Units.
May 8 05:19:26.078331 systemd[1]: Reached target swap.target - Swaps.
May 8 05:19:26.078346 systemd[1]: Reached target timers.target - Timer Units.
May 8 05:19:26.078359 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 05:19:26.078370 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 05:19:26.078380 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 05:19:26.078392 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 05:19:26.078402 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 05:19:26.078412 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 05:19:26.078423 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 05:19:26.078433 systemd[1]: Reached target sockets.target - Socket Units.
May 8 05:19:26.078442 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 05:19:26.078452 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 05:19:26.078462 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 05:19:26.078474 systemd[1]: Starting systemd-fsck-usr.service...
May 8 05:19:26.078484 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 05:19:26.078494 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 05:19:26.078519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 05:19:26.078549 systemd-journald[184]: Collecting audit messages is disabled.
May 8 05:19:26.078577 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 05:19:26.078587 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 05:19:26.078597 systemd[1]: Finished systemd-fsck-usr.service.
May 8 05:19:26.078612 systemd-journald[184]: Journal started
May 8 05:19:26.080661 systemd-journald[184]: Runtime Journal (/run/log/journal/7e617e5c8ab94ad3a94a04e42ff615db) is 8.0M, max 78.3M, 70.3M free.
May 8 05:19:26.076106 systemd-modules-load[185]: Inserted module 'overlay'
May 8 05:19:26.122373 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 05:19:26.122396 kernel: Bridge firewalling registered
May 8 05:19:26.122407 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 05:19:26.105057 systemd-modules-load[185]: Inserted module 'br_netfilter'
May 8 05:19:26.123537 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 05:19:26.124522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 05:19:26.134771 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 05:19:26.136814 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 05:19:26.139749 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 05:19:26.143185 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 05:19:26.155127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 05:19:26.160882 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 05:19:26.162752 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 05:19:26.164169 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 05:19:26.169750 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 05:19:26.171750 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 05:19:26.174947 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 05:19:26.194126 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 05:19:26.196164 dracut-cmdline[214]: dracut-dracut-053
May 8 05:19:26.202162 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=86cfbfcc89a9c46f6cbba5bdb3509d1ce1367f0c93b0b0e4c6bdcad1a2064c90
May 8 05:19:26.212539 systemd-resolved[215]: Positive Trust Anchors:
May 8 05:19:26.212558 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 05:19:26.212599 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 05:19:26.215826 systemd-resolved[215]: Defaulting to hostname 'linux'.
May 8 05:19:26.216887 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 05:19:26.219264 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 05:19:26.295717 kernel: SCSI subsystem initialized
May 8 05:19:26.306683 kernel: Loading iSCSI transport class v2.0-870.
May 8 05:19:26.318960 kernel: iscsi: registered transport (tcp)
May 8 05:19:26.340920 kernel: iscsi: registered transport (qla4xxx)
May 8 05:19:26.340987 kernel: QLogic iSCSI HBA Driver
May 8 05:19:26.397565 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 05:19:26.405973 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 05:19:26.459428 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 05:19:26.459527 kernel: device-mapper: uevent: version 1.0.3
May 8 05:19:26.462836 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 8 05:19:26.509736 kernel: raid6: sse2x4 gen() 12865 MB/s
May 8 05:19:26.527699 kernel: raid6: sse2x2 gen() 14895 MB/s
May 8 05:19:26.546537 kernel: raid6: sse2x1 gen() 9298 MB/s
May 8 05:19:26.546604 kernel: raid6: using algorithm sse2x2 gen() 14895 MB/s
May 8 05:19:26.565673 kernel: raid6: .... xor() 8935 MB/s, rmw enabled
May 8 05:19:26.565744 kernel: raid6: using ssse3x2 recovery algorithm
May 8 05:19:26.589209 kernel: xor: measuring software checksum speed
May 8 05:19:26.589277 kernel: prefetch64-sse : 18350 MB/sec
May 8 05:19:26.589728 kernel: generic_sse : 16860 MB/sec
May 8 05:19:26.590835 kernel: xor: using function: prefetch64-sse (18350 MB/sec)
May 8 05:19:26.774790 kernel: Btrfs loaded, zoned=no, fsverity=no
May 8 05:19:26.791007 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 8 05:19:26.796939 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 05:19:26.810355 systemd-udevd[402]: Using default interface naming scheme 'v255'.
May 8 05:19:26.814742 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 05:19:26.825941 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 8 05:19:26.844804 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
May 8 05:19:26.888192 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 05:19:26.897892 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 05:19:26.978165 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 05:19:26.988985 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 8 05:19:27.034762 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 8 05:19:27.038617 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 05:19:27.040921 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 05:19:27.042367 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 05:19:27.050787 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 8 05:19:27.066813 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 8 05:19:27.075643 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
May 8 05:19:27.113205 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 8 05:19:27.113325 kernel: libata version 3.00 loaded.
May 8 05:19:27.113339 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 05:19:27.113352 kernel: GPT:17805311 != 20971519
May 8 05:19:27.113369 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 05:19:27.113381 kernel: GPT:17805311 != 20971519
May 8 05:19:27.113391 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 05:19:27.113402 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 05:19:27.093600 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 05:19:27.093761 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 05:19:27.116954 kernel: ata_piix 0000:00:01.1: version 2.13
May 8 05:19:27.123888 kernel: scsi host0: ata_piix
May 8 05:19:27.124076 kernel: scsi host1: ata_piix
May 8 05:19:27.124183 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
May 8 05:19:27.124197 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
May 8 05:19:27.094449 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 05:19:27.095067 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 05:19:27.095250 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 05:19:27.096692 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 8 05:19:27.107279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 05:19:27.164637 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (458)
May 8 05:19:27.168601 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 8 05:19:27.200617 kernel: BTRFS: device fsid 28014d97-e6d7-4db4-b1d9-76a980e09972 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (454)
May 8 05:19:27.206447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 05:19:27.222287 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 8 05:19:27.230876 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 05:19:27.235835 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 8 05:19:27.236518 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 8 05:19:27.243783 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 8 05:19:27.246372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 05:19:27.268764 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 05:19:27.268360 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 05:19:27.270017 disk-uuid[506]: Primary Header is updated.
May 8 05:19:27.270017 disk-uuid[506]: Secondary Entries is updated.
May 8 05:19:27.270017 disk-uuid[506]: Secondary Header is updated.
May 8 05:19:28.294800 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 05:19:28.296417 disk-uuid[516]: The operation has completed successfully.
May 8 05:19:28.368569 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 05:19:28.368858 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 8 05:19:28.414774 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 8 05:19:28.420107 sh[529]: Success
May 8 05:19:28.450671 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
May 8 05:19:28.571732 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 8 05:19:28.575181 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 8 05:19:28.583806 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 8 05:19:28.629324 kernel: BTRFS info (device dm-0): first mount of filesystem 28014d97-e6d7-4db4-b1d9-76a980e09972
May 8 05:19:28.629399 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 8 05:19:28.634175 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 8 05:19:28.639186 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 8 05:19:28.642991 kernel: BTRFS info (device dm-0): using free space tree
May 8 05:19:28.664062 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 8 05:19:28.666482 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 8 05:19:28.685024 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 8 05:19:28.691295 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 8 05:19:28.719708 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 05:19:28.719800 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 05:19:28.723943 kernel: BTRFS info (device vda6): using free space tree
May 8 05:19:28.737683 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 05:19:28.758198 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 05:19:28.764210 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 05:19:28.780533 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 8 05:19:28.789644 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 8 05:19:28.824153 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 05:19:28.832927 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 05:19:28.854240 systemd-networkd[711]: lo: Link UP
May 8 05:19:28.854252 systemd-networkd[711]: lo: Gained carrier
May 8 05:19:28.855742 systemd-networkd[711]: Enumeration completed
May 8 05:19:28.856147 systemd-networkd[711]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 05:19:28.856151 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 05:19:28.856160 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 05:19:28.857402 systemd-networkd[711]: eth0: Link UP
May 8 05:19:28.857406 systemd-networkd[711]: eth0: Gained carrier
May 8 05:19:28.857413 systemd-networkd[711]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 05:19:28.864970 systemd[1]: Reached target network.target - Network.
May 8 05:19:28.868233 systemd-networkd[711]: eth0: DHCPv4 address 172.24.4.234/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 8 05:19:28.928751 ignition[643]: Ignition 2.19.0
May 8 05:19:28.928762 ignition[643]: Stage: fetch-offline
May 8 05:19:28.928804 ignition[643]: no configs at "/usr/lib/ignition/base.d"
May 8 05:19:28.931586 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 05:19:28.928815 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:19:28.928930 ignition[643]: parsed url from cmdline: ""
May 8 05:19:28.928933 ignition[643]: no config URL provided
May 8 05:19:28.928939 ignition[643]: reading system config file "/usr/lib/ignition/user.ign"
May 8 05:19:28.928947 ignition[643]: no config at "/usr/lib/ignition/user.ign"
May 8 05:19:28.928951 ignition[643]: failed to fetch config: resource requires networking
May 8 05:19:28.929167 ignition[643]: Ignition finished successfully
May 8 05:19:28.943795 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 8 05:19:28.957850 ignition[721]: Ignition 2.19.0
May 8 05:19:28.957863 ignition[721]: Stage: fetch
May 8 05:19:28.958049 ignition[721]: no configs at "/usr/lib/ignition/base.d"
May 8 05:19:28.958062 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:19:28.958164 ignition[721]: parsed url from cmdline: ""
May 8 05:19:28.958169 ignition[721]: no config URL provided
May 8 05:19:28.958174 ignition[721]: reading system config file "/usr/lib/ignition/user.ign"
May 8 05:19:28.958182 ignition[721]: no config at "/usr/lib/ignition/user.ign"
May 8 05:19:28.958305 ignition[721]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 8 05:19:28.958436 ignition[721]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 8 05:19:28.958468 ignition[721]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 8 05:19:29.255910 ignition[721]: GET result: OK
May 8 05:19:29.256097 ignition[721]: parsing config with SHA512: aa2a195cd58cb59ea5ecab1a808de83435bde5c557c00e49e6792a78bb66869bc17a6043bb8b2896c8170801e8cb2f48cdd0105492ddf0c7fc5cf27530476f28
May 8 05:19:29.264984 unknown[721]: fetched base config from "system"
May 8 05:19:29.265010 unknown[721]: fetched base config from "system"
May 8 05:19:29.266338 ignition[721]: fetch: fetch complete
May 8 05:19:29.265024 unknown[721]: fetched user config from "openstack"
May 8 05:19:29.266351 ignition[721]: fetch: fetch passed
May 8 05:19:29.269917 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 8 05:19:29.266446 ignition[721]: Ignition finished successfully
May 8 05:19:29.280079 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 8 05:19:29.316566 ignition[727]: Ignition 2.19.0
May 8 05:19:29.316593 ignition[727]: Stage: kargs
May 8 05:19:29.317028 ignition[727]: no configs at "/usr/lib/ignition/base.d"
May 8 05:19:29.317055 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:19:29.321496 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 8 05:19:29.319353 ignition[727]: kargs: kargs passed
May 8 05:19:29.319453 ignition[727]: Ignition finished successfully
May 8 05:19:29.337422 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 8 05:19:29.365951 ignition[734]: Ignition 2.19.0
May 8 05:19:29.365968 ignition[734]: Stage: disks
May 8 05:19:29.366369 ignition[734]: no configs at "/usr/lib/ignition/base.d"
May 8 05:19:29.366394 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:19:29.371073 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 8 05:19:29.368767 ignition[734]: disks: disks passed
May 8 05:19:29.373611 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 8 05:19:29.368866 ignition[734]: Ignition finished successfully
May 8 05:19:29.375844 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 8 05:19:29.378711 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 05:19:29.380990 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 05:19:29.383978 systemd[1]: Reached target basic.target - Basic System.
May 8 05:19:29.397126 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 8 05:19:29.425332 systemd-fsck[742]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 8 05:19:29.435221 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 8 05:19:29.443855 systemd[1]: Mounting sysroot.mount - /sysroot...
May 8 05:19:29.625247 kernel: EXT4-fs (vda9): mounted filesystem 36960c89-ba45-4808-a41c-bf61ce9470a3 r/w with ordered data mode. Quota mode: none.
May 8 05:19:29.627279 systemd[1]: Mounted sysroot.mount - /sysroot.
May 8 05:19:29.630695 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 8 05:19:29.643864 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 05:19:29.647490 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 8 05:19:29.648983 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 8 05:19:29.651239 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
May 8 05:19:29.654278 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 05:19:29.655273 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 05:19:29.671766 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (750)
May 8 05:19:29.671824 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 05:19:29.671855 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 05:19:29.671884 kernel: BTRFS info (device vda6): using free space tree
May 8 05:19:29.675192 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 8 05:19:29.677656 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 8 05:19:29.689336 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 05:19:29.688428 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 05:19:29.786879 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory
May 8 05:19:29.794966 initrd-setup-root[785]: cut: /sysroot/etc/group: No such file or directory
May 8 05:19:29.799966 initrd-setup-root[792]: cut: /sysroot/etc/shadow: No such file or directory
May 8 05:19:29.805654 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 05:19:29.920470 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 8 05:19:29.927790 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 8 05:19:29.931831 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 8 05:19:29.937462 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 8 05:19:29.940559 kernel: BTRFS info (device vda6): last unmount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 05:19:29.971585 ignition[872]: INFO : Ignition 2.19.0
May 8 05:19:29.971585 ignition[872]: INFO : Stage: mount
May 8 05:19:29.973466 ignition[872]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 05:19:29.973466 ignition[872]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:19:29.973466 ignition[872]: INFO : mount: mount passed
May 8 05:19:29.973466 ignition[872]: INFO : Ignition finished successfully
May 8 05:19:29.973882 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 8 05:19:29.975101 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 8 05:19:30.453917 systemd-networkd[711]: eth0: Gained IPv6LL
May 8 05:19:36.893121 coreos-metadata[752]: May 08 05:19:36.893 WARN failed to locate config-drive, using the metadata service API instead
May 8 05:19:36.934358 coreos-metadata[752]: May 08 05:19:36.934 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 8 05:19:36.949584 coreos-metadata[752]: May 08 05:19:36.949 INFO Fetch successful
May 8 05:19:36.951103 coreos-metadata[752]: May 08 05:19:36.951 INFO wrote hostname ci-4081-3-3-n-e0f469a76e.novalocal to /sysroot/etc/hostname
May 8 05:19:36.954512 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 8 05:19:36.954790 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
May 8 05:19:36.968828 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 8 05:19:36.989968 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 8 05:19:37.016708 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (889)
May 8 05:19:37.025077 kernel: BTRFS info (device vda6): first mount of filesystem a884989d-7a9b-4fbd-878f-8ac586ff8595
May 8 05:19:37.025147 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 8 05:19:37.029270 kernel: BTRFS info (device vda6): using free space tree
May 8 05:19:37.040757 kernel: BTRFS info (device vda6): auto enabling async discard
May 8 05:19:37.046439 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 8 05:19:37.088698 ignition[907]: INFO : Ignition 2.19.0
May 8 05:19:37.088698 ignition[907]: INFO : Stage: files
May 8 05:19:37.088698 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 05:19:37.088698 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:19:37.097430 ignition[907]: DEBUG : files: compiled without relabeling support, skipping
May 8 05:19:37.099316 ignition[907]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 05:19:37.101243 ignition[907]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 05:19:37.108085 ignition[907]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 05:19:37.110052 ignition[907]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 05:19:37.110052 ignition[907]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 05:19:37.109311 unknown[907]: wrote ssh authorized keys file for user: core
May 8 05:19:37.115525 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 05:19:37.115525 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 8 05:19:37.185529 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 05:19:37.772455 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 8 05:19:37.772455 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 8 05:19:37.777150 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 8 05:19:37.777150 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 05:19:37.777150 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 05:19:37.777150 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 05:19:37.777150 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 05:19:37.777150 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 05:19:37.777150 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 05:19:37.777150 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 05:19:37.777150 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 05:19:37.777150 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 05:19:37.777150 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 05:19:37.777150 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 05:19:37.777150 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 8 05:19:38.369072 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 8 05:19:40.636039 ignition[907]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 8 05:19:40.636039 ignition[907]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 8 05:19:40.638692 ignition[907]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 05:19:40.639695 ignition[907]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 05:19:40.639695 ignition[907]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 8 05:19:40.639695 ignition[907]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 8 05:19:40.639695 ignition[907]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 8 05:19:40.642922 ignition[907]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 05:19:40.642922 ignition[907]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 05:19:40.642922 ignition[907]: INFO : files: files passed
May 8 05:19:40.642922 ignition[907]: INFO : Ignition finished successfully
May 8 05:19:40.644750 systemd[1]: Finished ignition-files.service - Ignition (files).
May 8 05:19:40.657967 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 8 05:19:40.661943 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 8 05:19:40.670142 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 05:19:40.671004 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 8 05:19:40.695908 initrd-setup-root-after-ignition[939]: grep:
May 8 05:19:40.695908 initrd-setup-root-after-ignition[935]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 05:19:40.695908 initrd-setup-root-after-ignition[935]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 8 05:19:40.698753 initrd-setup-root-after-ignition[939]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 05:19:40.700368 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 05:19:40.703580 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 8 05:19:40.711936 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 8 05:19:40.743002 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 05:19:40.743219 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 8 05:19:40.745515 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 8 05:19:40.747182 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 8 05:19:40.757244 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 8 05:19:40.768044 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 8 05:19:40.799204 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 05:19:40.807901 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 8 05:19:40.842599 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 8 05:19:40.844367 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 05:19:40.847311 systemd[1]: Stopped target timers.target - Timer Units.
May 8 05:19:40.850021 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 05:19:40.850301 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 8 05:19:40.853411 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 8 05:19:40.855348 systemd[1]: Stopped target basic.target - Basic System.
May 8 05:19:40.858082 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 8 05:19:40.860608 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 8 05:19:40.863080 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 8 05:19:40.865912 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 8 05:19:40.868755 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 8 05:19:40.871723 systemd[1]: Stopped target sysinit.target - System Initialization.
May 8 05:19:40.874726 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 8 05:19:40.877588 systemd[1]: Stopped target swap.target - Swaps.
May 8 05:19:40.880504 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 05:19:40.880853 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 8 05:19:40.883854 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 8 05:19:40.885708 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 05:19:40.888163 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 8 05:19:40.890617 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 05:19:40.892823 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 8 05:19:40.893097 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 8 05:19:40.896611 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 8 05:19:40.896965 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 8 05:19:40.898791 systemd[1]: ignition-files.service: Deactivated successfully.
May 8 05:19:40.899053 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 8 05:19:40.910142 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 8 05:19:40.918892 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 8 05:19:40.923876 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 8 05:19:40.924237 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 05:19:40.928527 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 8 05:19:40.928838 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 8 05:19:40.939214 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 8 05:19:40.939668 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 8 05:19:40.946579 ignition[959]: INFO : Ignition 2.19.0
May 8 05:19:40.946579 ignition[959]: INFO : Stage: umount
May 8 05:19:40.947985 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 05:19:40.947985 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 8 05:19:40.951974 ignition[959]: INFO : umount: umount passed
May 8 05:19:40.951974 ignition[959]: INFO : Ignition finished successfully
May 8 05:19:40.949798 systemd[1]: ignition-mount.service: Deactivated successfully.
May 8 05:19:40.949918 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 8 05:19:40.953069 systemd[1]: ignition-disks.service: Deactivated successfully.
May 8 05:19:40.953152 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 8 05:19:40.956060 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 8 05:19:40.956108 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 8 05:19:40.957053 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 8 05:19:40.957095 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 8 05:19:40.959399 systemd[1]: Stopped target network.target - Network.
May 8 05:19:40.960470 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 8 05:19:40.960517 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 8 05:19:40.961979 systemd[1]: Stopped target paths.target - Path Units.
May 8 05:19:40.962925 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 05:19:40.966732 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 05:19:40.967900 systemd[1]: Stopped target slices.target - Slice Units.
May 8 05:19:40.969125 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 05:19:40.970446 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 05:19:40.970549 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 05:19:40.971534 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 05:19:40.971566 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 05:19:40.972548 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 05:19:40.972598 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 05:19:40.973658 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 05:19:40.973709 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 05:19:40.975027 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 05:19:40.976040 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 05:19:40.978096 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 05:19:40.978699 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 05:19:40.978790 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 05:19:40.978829 systemd-networkd[711]: eth0: DHCPv6 lease lost
May 8 05:19:40.980563 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 05:19:40.980671 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 05:19:40.981891 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 05:19:40.981987 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 05:19:40.985010 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 05:19:40.985059 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 05:19:40.991808 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 05:19:40.992981 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 05:19:40.993047 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 05:19:40.993743 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 05:19:40.995005 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 05:19:40.995112 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 05:19:41.000321 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 05:19:41.000446 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 05:19:41.002139 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 05:19:41.002188 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 05:19:41.004125 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 05:19:41.004184 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 05:19:41.004988 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 05:19:41.005120 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 05:19:41.007937 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 05:19:41.008030 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 05:19:41.009415 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 05:19:41.009481 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 05:19:41.010823 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 05:19:41.010855 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 05:19:41.012002 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 05:19:41.012049 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 05:19:41.013587 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 05:19:41.013656 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 05:19:41.014907 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 05:19:41.014952 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 05:19:41.023772 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 05:19:41.025947 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 05:19:41.026002 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 05:19:41.027239 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 8 05:19:41.027285 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 05:19:41.029005 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 05:19:41.029045 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 05:19:41.031940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 05:19:41.031986 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 05:19:41.033607 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 05:19:41.033721 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 05:19:41.034923 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 05:19:41.041819 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 05:19:41.049686 systemd[1]: Switching root.
May 8 05:19:41.085106 systemd-journald[184]: Journal stopped
May 8 05:19:43.000916 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
May 8 05:19:43.000979 kernel: SELinux: policy capability network_peer_controls=1
May 8 05:19:43.001000 kernel: SELinux: policy capability open_perms=1
May 8 05:19:43.001012 kernel: SELinux: policy capability extended_socket_class=1
May 8 05:19:43.001024 kernel: SELinux: policy capability always_check_network=0
May 8 05:19:43.001035 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 05:19:43.001047 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 05:19:43.001059 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 05:19:43.001074 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 05:19:43.001086 kernel: audit: type=1403 audit(1746681581.841:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 05:19:43.001099 systemd[1]: Successfully loaded SELinux policy in 77.852ms.
May 8 05:19:43.004037 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.515ms.
May 8 05:19:43.004060 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 05:19:43.004073 systemd[1]: Detected virtualization kvm.
May 8 05:19:43.004085 systemd[1]: Detected architecture x86-64.
May 8 05:19:43.004097 systemd[1]: Detected first boot.
May 8 05:19:43.004113 systemd[1]: Hostname set to <ci-4081-3-3-n-e0f469a76e.novalocal>.
May 8 05:19:43.004125 systemd[1]: Initializing machine ID from VM UUID.
May 8 05:19:43.004136 zram_generator::config[1002]: No configuration found.
May 8 05:19:43.004149 systemd[1]: Populated /etc with preset unit settings.
May 8 05:19:43.004161 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 05:19:43.004172 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 05:19:43.004184 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 05:19:43.004196 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 05:19:43.004208 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 05:19:43.004222 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 05:19:43.004234 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 05:19:43.004245 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 05:19:43.004259 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 05:19:43.004272 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 05:19:43.004289 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 05:19:43.004301 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 05:19:43.004314 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 05:19:43.004328 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 05:19:43.004341 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 05:19:43.004353 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 05:19:43.004366 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 05:19:43.004379 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 8 05:19:43.004391 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 05:19:43.004404 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 05:19:43.004417 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 05:19:43.004431 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 05:19:43.004444 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 05:19:43.004457 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 05:19:43.004469 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 05:19:43.004481 systemd[1]: Reached target slices.target - Slice Units.
May 8 05:19:43.004494 systemd[1]: Reached target swap.target - Swaps.
May 8 05:19:43.004506 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 05:19:43.004519 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 05:19:43.004533 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 05:19:43.004546 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 05:19:43.004558 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 05:19:43.004570 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 05:19:43.004583 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 05:19:43.004596 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 05:19:43.004608 systemd[1]: Mounting media.mount - External Media Directory...
May 8 05:19:43.005696 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 05:19:43.005728 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 05:19:43.005741 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 05:19:43.005754 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 05:19:43.005768 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 05:19:43.005781 systemd[1]: Reached target machines.target - Containers.
May 8 05:19:43.005793 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 05:19:43.005806 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 05:19:43.005818 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 05:19:43.005831 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 05:19:43.005848 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 05:19:43.005861 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 05:19:43.005874 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 05:19:43.005886 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 05:19:43.005899 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 05:19:43.005911 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 05:19:43.005924 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 05:19:43.005936 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 05:19:43.005951 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 05:19:43.005963 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 05:19:43.005975 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 05:19:43.005988 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 05:19:43.006001 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 05:19:43.006013 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 05:19:43.006026 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 05:19:43.006039 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 05:19:43.006051 systemd[1]: Stopped verity-setup.service.
May 8 05:19:43.006066 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 05:19:43.006079 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 05:19:43.006092 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 05:19:43.006104 systemd[1]: Mounted media.mount - External Media Directory.
May 8 05:19:43.006117 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 05:19:43.006129 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 05:19:43.006144 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 05:19:43.006157 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 05:19:43.006169 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 05:19:43.006182 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 05:19:43.006194 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 05:19:43.006207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 05:19:43.006223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 05:19:43.006240 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 05:19:43.006253 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 05:19:43.006265 kernel: loop: module loaded
May 8 05:19:43.006280 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 05:19:43.006317 systemd-journald[1098]: Collecting audit messages is disabled.
May 8 05:19:43.006351 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 05:19:43.006364 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 05:19:43.006377 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 05:19:43.006390 kernel: ACPI: bus type drm_connector registered
May 8 05:19:43.006402 kernel: fuse: init (API version 7.39)
May 8 05:19:43.006414 systemd-journald[1098]: Journal started
May 8 05:19:43.006446 systemd-journald[1098]: Runtime Journal (/run/log/journal/7e617e5c8ab94ad3a94a04e42ff615db) is 8.0M, max 78.3M, 70.3M free.
May 8 05:19:43.011450 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 05:19:42.614019 systemd[1]: Queued start job for default target multi-user.target.
May 8 05:19:42.637894 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 8 05:19:42.638281 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 05:19:43.019688 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 05:19:43.019758 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 05:19:43.019774 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 8 05:19:43.030718 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 05:19:43.036833 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 05:19:43.040668 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 05:19:43.052692 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 05:19:43.052761 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 05:19:43.064694 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 05:19:43.080726 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 05:19:43.088643 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 05:19:43.101836 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 05:19:43.108540 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 05:19:43.110434 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 05:19:43.111712 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 05:19:43.112494 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 05:19:43.112680 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 05:19:43.114059 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 05:19:43.114234 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 05:19:43.119450 kernel: loop0: detected capacity change from 0 to 140768
May 8 05:19:43.116277 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 05:19:43.121045 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 05:19:43.121816 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 05:19:43.123209 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 05:19:43.138725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 05:19:43.156049 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 05:19:43.169538 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 05:19:43.179336 systemd-tmpfiles[1119]: ACLs are not supported, ignoring.
May 8 05:19:43.189084 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 05:19:43.179352 systemd-tmpfiles[1119]: ACLs are not supported, ignoring.
May 8 05:19:43.179356 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 05:19:43.187437 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 8 05:19:43.188096 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 05:19:43.191392 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 05:19:43.193599 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 05:19:43.203746 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 05:19:43.207701 systemd-journald[1098]: Time spent on flushing to /var/log/journal/7e617e5c8ab94ad3a94a04e42ff615db is 32.154ms for 957 entries.
May 8 05:19:43.207701 systemd-journald[1098]: System Journal (/var/log/journal/7e617e5c8ab94ad3a94a04e42ff615db) is 8.0M, max 584.8M, 576.8M free.
May 8 05:19:43.308010 systemd-journald[1098]: Received client request to flush runtime journal.
May 8 05:19:43.308066 kernel: loop1: detected capacity change from 0 to 142488
May 8 05:19:43.213783 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 05:19:43.229862 udevadm[1149]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 8 05:19:43.310870 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 05:19:43.323020 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 05:19:43.323526 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 8 05:19:43.343702 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 05:19:43.351842 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 05:19:43.358008 kernel: loop2: detected capacity change from 0 to 210664
May 8 05:19:43.373930 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
May 8 05:19:43.373950 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
May 8 05:19:43.379378 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 05:19:43.420023 kernel: loop3: detected capacity change from 0 to 8
May 8 05:19:43.446904 kernel: loop4: detected capacity change from 0 to 140768
May 8 05:19:43.500668 kernel: loop5: detected capacity change from 0 to 142488
May 8 05:19:43.540664 kernel: loop6: detected capacity change from 0 to 210664
May 8 05:19:43.603836 kernel: loop7: detected capacity change from 0 to 8
May 8 05:19:43.604520 (sd-merge)[1164]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 8 05:19:43.605842 (sd-merge)[1164]: Merged extensions into '/usr'.
May 8 05:19:43.610854 systemd[1]: Reloading requested from client PID 1118 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 05:19:43.610873 systemd[1]: Reloading...
May 8 05:19:43.722657 zram_generator::config[1190]: No configuration found.
May 8 05:19:43.906931 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 05:19:43.965646 systemd[1]: Reloading finished in 354 ms.
May 8 05:19:44.014409 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 05:19:44.025861 systemd[1]: Starting ensure-sysext.service...
May 8 05:19:44.027773 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 05:19:44.029601 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 05:19:44.034844 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 05:19:44.047406 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
May 8 05:19:44.047427 systemd[1]: Reloading...
May 8 05:19:44.062190 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 05:19:44.063134 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 05:19:44.065483 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 05:19:44.066863 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
May 8 05:19:44.067315 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
May 8 05:19:44.077584 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
May 8 05:19:44.077598 systemd-tmpfiles[1246]: Skipping /boot
May 8 05:19:44.097566 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
May 8 05:19:44.097582 systemd-tmpfiles[1246]: Skipping /boot
May 8 05:19:44.111197 ldconfig[1113]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 05:19:44.118637 systemd-udevd[1248]: Using default interface naming scheme 'v255'.
May 8 05:19:44.127650 zram_generator::config[1273]: No configuration found.
May 8 05:19:44.296652 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1340)
May 8 05:19:44.325646 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 8 05:19:44.329674 kernel: ACPI: button: Power Button [PWRF]
May 8 05:19:44.343714 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 8 05:19:44.344480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 05:19:44.431277 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 8 05:19:44.443273 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 8 05:19:44.443346 kernel: mousedev: PS/2 mouse device common for all mice
May 8 05:19:44.443363 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 8 05:19:44.448436 kernel: Console: switching to colour dummy device 80x25
May 8 05:19:44.449737 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 8 05:19:44.449783 kernel: [drm] features: -context_init
May 8 05:19:44.456654 kernel: [drm] number of scanouts: 1
May 8 05:19:44.458652 kernel: [drm] number of cap sets: 0
May 8 05:19:44.463063 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 8 05:19:44.474155 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 8 05:19:44.474264 kernel: Console: switching to colour frame buffer device 160x50
May 8 05:19:44.474226 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 8 05:19:44.474520 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 8 05:19:44.482557 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 8 05:19:44.482808 systemd[1]: Reloading finished in 435 ms.
May 8 05:19:44.497919 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 05:19:44.500734 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 05:19:44.504970 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 05:19:44.529061 systemd[1]: Finished ensure-sysext.service.
May 8 05:19:44.543457 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 05:19:44.551580 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 05:19:44.557797 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 8 05:19:44.563798 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 05:19:44.564046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 05:19:44.566897 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 05:19:44.572879 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 05:19:44.575124 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 05:19:44.578796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 05:19:44.581728 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 05:19:44.582614 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 05:19:44.584852 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 05:19:44.589791 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 05:19:44.592785 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 05:19:44.596831 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 05:19:44.606832 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 8 05:19:44.609695 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 05:19:44.617768 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 05:19:44.617868 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 8 05:19:44.618404 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 05:19:44.619349 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 05:19:44.622778 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 05:19:44.625779 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 05:19:44.640237 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 05:19:44.640725 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 05:19:44.642669 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 05:19:44.646275 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 05:19:44.651093 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 05:19:44.653231 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 05:19:44.655940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 05:19:44.663683 lvm[1370]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 05:19:44.664339 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 05:19:44.687711 augenrules[1404]: No rules
May 8 05:19:44.688322 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 05:19:44.693504 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 8 05:19:44.699572 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 05:19:44.705243 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 05:19:44.717816 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 05:19:44.733093 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 05:19:44.735661 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 05:19:44.748825 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 05:19:44.750723 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 05:19:44.765192 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 05:19:44.777020 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 05:19:44.786736 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 05:19:44.820686 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 05:19:44.821563 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 05:19:44.851142 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 8 05:19:44.853211 systemd[1]: Reached target time-set.target - System Time Set.
May 8 05:19:44.869372 systemd-networkd[1382]: lo: Link UP
May 8 05:19:44.869384 systemd-networkd[1382]: lo: Gained carrier
May 8 05:19:44.870776 systemd-networkd[1382]: Enumeration completed
May 8 05:19:44.870895 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 05:19:44.874919 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 05:19:44.874929 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 05:19:44.875613 systemd-networkd[1382]: eth0: Link UP
May 8 05:19:44.875617 systemd-networkd[1382]: eth0: Gained carrier
May 8 05:19:44.875656 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 05:19:44.879071 systemd-resolved[1385]: Positive Trust Anchors:
May 8 05:19:44.879415 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 05:19:44.879467 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 05:19:44.884002 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 05:19:44.884618 systemd-resolved[1385]: Using system hostname 'ci-4081-3-3-n-e0f469a76e.novalocal'.
May 8 05:19:44.888573 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 05:19:44.889369 systemd[1]: Reached target network.target - Network.
May 8 05:19:44.889935 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 05:19:44.890427 systemd[1]: Reached target sysinit.target - System Initialization.
May 8 05:19:44.893907 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 8 05:19:44.894530 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 8 05:19:44.897502 systemd-networkd[1382]: eth0: DHCPv4 address 172.24.4.234/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 8 05:19:44.898432 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
May 8 05:19:44.898788 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 8 05:19:44.900536 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 8 05:19:44.902077 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 8 05:19:44.902865 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 8 05:19:44.902980 systemd[1]: Reached target paths.target - Path Units.
May 8 05:19:44.903893 systemd[1]: Reached target timers.target - Timer Units.
May 8 05:19:44.906361 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 8 05:19:44.909783 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 8 05:19:44.919565 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 8 05:19:44.921495 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 8 05:19:44.923957 systemd[1]: Reached target sockets.target - Socket Units.
May 8 05:19:44.925872 systemd[1]: Reached target basic.target - Basic System.
May 8 05:19:44.927967 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 8 05:19:44.928001 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 8 05:19:44.932781 systemd[1]: Starting containerd.service - containerd container runtime...
May 8 05:19:44.936574 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 8 05:19:44.949929 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 8 05:19:44.954418 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 8 05:19:44.958730 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 8 05:19:44.960450 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 8 05:19:44.964783 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 8 05:19:44.972605 jq[1437]: false
May 8 05:19:44.969791 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 8 05:19:44.982608 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 8 05:19:44.989831 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 8 05:19:45.003838 systemd[1]: Starting systemd-logind.service - User Login Management...
May 8 05:19:45.005072 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 8 05:19:45.005607 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 8 05:19:45.008809 systemd[1]: Starting update-engine.service - Update Engine...
May 8 05:19:45.019767 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 8 05:19:45.027098 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 8 05:19:45.027735 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 8 05:19:45.029155 jq[1447]: true
May 8 05:19:45.031016 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 8 05:19:45.031841 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 8 05:19:45.054919 jq[1451]: true
May 8 05:19:45.073145 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 8 05:19:45.082717 update_engine[1446]: I20250508 05:19:45.082651 1446 main.cc:92] Flatcar Update Engine starting
May 8 05:19:45.093729 extend-filesystems[1438]: Found loop4
May 8 05:19:45.093729 extend-filesystems[1438]: Found loop5
May 8 05:19:45.093729 extend-filesystems[1438]: Found loop6
May 8 05:19:45.093729 extend-filesystems[1438]: Found loop7
May 8 05:19:45.093729 extend-filesystems[1438]: Found vda
May 8 05:19:45.093729 extend-filesystems[1438]: Found vda1
May 8 05:19:45.093729 extend-filesystems[1438]: Found vda2
May 8 05:19:45.093729 extend-filesystems[1438]: Found vda3
May 8 05:19:45.093729 extend-filesystems[1438]: Found usr
May 8 05:19:45.093729 extend-filesystems[1438]: Found vda4
May 8 05:19:45.093729 extend-filesystems[1438]: Found vda6
May 8 05:19:45.093729 extend-filesystems[1438]: Found vda7
May 8 05:19:45.093729 extend-filesystems[1438]: Found vda9
May 8 05:19:45.093729 extend-filesystems[1438]: Checking size of /dev/vda9
May 8 05:19:45.100778 systemd[1]: motdgen.service: Deactivated successfully.
May 8 05:19:45.127198 tar[1449]: linux-amd64/helm
May 8 05:19:45.127427 update_engine[1446]: I20250508 05:19:45.109443 1446 update_check_scheduler.cc:74] Next update check in 2m45s
May 8 05:19:45.106740 dbus-daemon[1434]: [system] SELinux support is enabled
May 8 05:19:45.100973 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 8 05:19:45.107184 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 8 05:19:45.120101 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 8 05:19:45.120132 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 8 05:19:45.122096 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 8 05:19:45.122126 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 8 05:19:45.122747 systemd[1]: Started update-engine.service - Update Engine.
May 8 05:19:45.141836 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 8 05:19:45.148129 systemd-logind[1443]: New seat seat0.
May 8 05:19:45.157608 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button)
May 8 05:19:45.157656 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 8 05:19:45.159185 systemd[1]: Started systemd-logind.service - User Login Management.
May 8 05:19:45.169054 extend-filesystems[1438]: Resized partition /dev/vda9
May 8 05:19:45.189137 extend-filesystems[1489]: resize2fs 1.47.1 (20-May-2024)
May 8 05:19:45.212996 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 8 05:19:45.227702 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 8 05:19:45.320316 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1338)
May 8 05:19:45.312800 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 8 05:19:45.327553 bash[1487]: Updated "/home/core/.ssh/authorized_keys"
May 8 05:19:45.328940 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 8 05:19:45.332112 extend-filesystems[1489]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 8 05:19:45.332112 extend-filesystems[1489]: old_desc_blocks = 1, new_desc_blocks = 1
May 8 05:19:45.332112 extend-filesystems[1489]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 8 05:19:45.349868 extend-filesystems[1438]: Resized filesystem in /dev/vda9
May 8 05:19:45.335498 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 8 05:19:45.338850 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 8 05:19:45.354958 systemd[1]: Starting sshkeys.service...
May 8 05:19:45.389892 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 8 05:19:45.400998 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 8 05:19:45.587607 containerd[1462]: time="2025-05-08T05:19:45.587502356Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 8 05:19:45.633643 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 8 05:19:45.634993 containerd[1462]: time="2025-05-08T05:19:45.634953109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 8 05:19:45.637790 containerd[1462]: time="2025-05-08T05:19:45.637741459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 8 05:19:45.637790 containerd[1462]: time="2025-05-08T05:19:45.637780793Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 8 05:19:45.637868 containerd[1462]: time="2025-05-08T05:19:45.637800359Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 8 05:19:45.637989 containerd[1462]: time="2025-05-08T05:19:45.637962604Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 8 05:19:45.638025 containerd[1462]: time="2025-05-08T05:19:45.637987811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 8 05:19:45.638076 containerd[1462]: time="2025-05-08T05:19:45.638049617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 8 05:19:45.638076 containerd[1462]: time="2025-05-08T05:19:45.638072029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 8 05:19:45.638257 containerd[1462]: time="2025-05-08T05:19:45.638228182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 05:19:45.638257 containerd[1462]: time="2025-05-08T05:19:45.638251255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 8 05:19:45.638318 containerd[1462]: time="2025-05-08T05:19:45.638272605Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 8 05:19:45.638318 containerd[1462]: time="2025-05-08T05:19:45.638285399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 8 05:19:45.638386 containerd[1462]: time="2025-05-08T05:19:45.638362524Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 8 05:19:45.638599 containerd[1462]: time="2025-05-08T05:19:45.638573710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 8 05:19:45.638729 containerd[1462]: time="2025-05-08T05:19:45.638703503Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 8 05:19:45.638729 containerd[1462]: time="2025-05-08T05:19:45.638725094Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 8 05:19:45.638831 containerd[1462]: time="2025-05-08T05:19:45.638807218Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 8 05:19:45.638885 containerd[1462]: time="2025-05-08T05:19:45.638864465Z" level=info msg="metadata content store policy set" policy=shared
May 8 05:19:45.656133 containerd[1462]: time="2025-05-08T05:19:45.656015382Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 8 05:19:45.656133 containerd[1462]: time="2025-05-08T05:19:45.656073491Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 8 05:19:45.656133 containerd[1462]: time="2025-05-08T05:19:45.656093949Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 8 05:19:45.656133 containerd[1462]: time="2025-05-08T05:19:45.656112794Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 8 05:19:45.656133 containerd[1462]: time="2025-05-08T05:19:45.656130518Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 8 05:19:45.656133 containerd[1462]: time="2025-05-08T05:19:45.656264890Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 8 05:19:45.656959 containerd[1462]: time="2025-05-08T05:19:45.656563670Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 8 05:19:45.657050 containerd[1462]: time="2025-05-08T05:19:45.657032549Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 8 05:19:45.657207 containerd[1462]: time="2025-05-08T05:19:45.657122208Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 8 05:19:45.657207 containerd[1462]: time="2025-05-08T05:19:45.657152925Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 8 05:19:45.657207 containerd[1462]: time="2025-05-08T05:19:45.657175608Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 8 05:19:45.657314 containerd[1462]: time="2025-05-08T05:19:45.657190466Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 8 05:19:45.657440 containerd[1462]: time="2025-05-08T05:19:45.657359643Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 8 05:19:45.657440 containerd[1462]: time="2025-05-08T05:19:45.657381905Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 8 05:19:45.657440 containerd[1462]: time="2025-05-08T05:19:45.657401962Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 8 05:19:45.657440 containerd[1462]: time="2025-05-08T05:19:45.657416570Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 8 05:19:45.657567 containerd[1462]: time="2025-05-08T05:19:45.657552304Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 8 05:19:45.657765 containerd[1462]: time="2025-05-08T05:19:45.657641532Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 8 05:19:45.657765 containerd[1462]: time="2025-05-08T05:19:45.657670977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 8 05:19:45.657765 containerd[1462]: time="2025-05-08T05:19:45.657686746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 8 05:19:45.657765 containerd[1462]: time="2025-05-08T05:19:45.657699771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 8 05:19:45.657765 containerd[1462]: time="2025-05-08T05:19:45.657714308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 8 05:19:45.657765 containerd[1462]: time="2025-05-08T05:19:45.657726912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 8 05:19:45.657765 containerd[1462]: time="2025-05-08T05:19:45.657741740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 8 05:19:45.657956 containerd[1462]: time="2025-05-08T05:19:45.657939771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 8 05:19:45.658313 containerd[1462]: time="2025-05-08T05:19:45.658002869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 8 05:19:45.658313 containerd[1462]: time="2025-05-08T05:19:45.658021334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 8 05:19:45.658313 containerd[1462]: time="2025-05-08T05:19:45.658037675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 8 05:19:45.658313 containerd[1462]: time="2025-05-08T05:19:45.658050268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 8 05:19:45.658313 containerd[1462]: time="2025-05-08T05:19:45.658063283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 8 05:19:45.658313 containerd[1462]: time="2025-05-08T05:19:45.658076898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 8 05:19:45.658313 containerd[1462]: time="2025-05-08T05:19:45.658092818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 8 05:19:45.658313 containerd[1462]: time="2025-05-08T05:19:45.658114198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 8 05:19:45.658313 containerd[1462]: time="2025-05-08T05:19:45.658126671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 8 05:19:45.658313 containerd[1462]: time="2025-05-08T05:19:45.658138854Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 8 05:19:45.658313 containerd[1462]: time="2025-05-08T05:19:45.658182516Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 8 05:19:45.658313 containerd[1462]: time="2025-05-08T05:19:45.658203756Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 8 05:19:45.658313 containerd[1462]: time="2025-05-08T05:19:45.658215518Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 8 05:19:45.658590 containerd[1462]: time="2025-05-08T05:19:45.658229104Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 8 05:19:45.658590 containerd[1462]: time="2025-05-08T05:19:45.658240264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 8 05:19:45.658590 containerd[1462]: time="2025-05-08T05:19:45.658252598Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 8 05:19:45.658590 containerd[1462]: time="2025-05-08T05:19:45.658263218Z" level=info msg="NRI interface is disabled by configuration."
May 8 05:19:45.658590 containerd[1462]: time="2025-05-08T05:19:45.658274108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 May 8 05:19:45.659385 containerd[1462]: time="2025-05-08T05:19:45.659032460Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 05:19:45.659526 containerd[1462]: time="2025-05-08T05:19:45.659398387Z" level=info msg="Connect containerd service" May 8 05:19:45.659526 containerd[1462]: time="2025-05-08T05:19:45.659454752Z" level=info msg="using legacy CRI server" May 8 05:19:45.659526 containerd[1462]: time="2025-05-08T05:19:45.659465783Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 05:19:45.660121 containerd[1462]: time="2025-05-08T05:19:45.659597570Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 05:19:45.660305 containerd[1462]: time="2025-05-08T05:19:45.660270192Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 05:19:45.660897 
containerd[1462]: time="2025-05-08T05:19:45.660861871Z" level=info msg="Start subscribing containerd event" May 8 05:19:45.661006 containerd[1462]: time="2025-05-08T05:19:45.660984111Z" level=info msg="Start recovering state" May 8 05:19:45.661137 containerd[1462]: time="2025-05-08T05:19:45.661121007Z" level=info msg="Start event monitor" May 8 05:19:45.661224 containerd[1462]: time="2025-05-08T05:19:45.661207820Z" level=info msg="Start snapshots syncer" May 8 05:19:45.661290 containerd[1462]: time="2025-05-08T05:19:45.661275968Z" level=info msg="Start cni network conf syncer for default" May 8 05:19:45.661350 containerd[1462]: time="2025-05-08T05:19:45.661337303Z" level=info msg="Start streaming server" May 8 05:19:45.662923 containerd[1462]: time="2025-05-08T05:19:45.662767525Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 05:19:45.662923 containerd[1462]: time="2025-05-08T05:19:45.662883513Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 05:19:45.663114 systemd[1]: Started containerd.service - containerd container runtime. May 8 05:19:45.667654 containerd[1462]: time="2025-05-08T05:19:45.666686616Z" level=info msg="containerd successfully booted in 0.080549s" May 8 05:19:45.676974 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 05:19:45.687062 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 05:19:45.709343 systemd[1]: issuegen.service: Deactivated successfully. May 8 05:19:45.709764 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 05:19:45.723432 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 05:19:45.739411 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 05:19:45.747947 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 05:19:45.760969 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 05:19:45.761830 systemd[1]: Reached target getty.target - Login Prompts. May 8 05:19:45.886360 tar[1449]: linux-amd64/LICENSE May 8 05:19:45.886511 tar[1449]: linux-amd64/README.md May 8 05:19:45.898267 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 8 05:19:46.134047 systemd-networkd[1382]: eth0: Gained IPv6LL May 8 05:19:46.135121 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection. May 8 05:19:46.145163 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 05:19:46.154961 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 05:19:46.160216 systemd[1]: Reached target network-online.target - Network is Online. May 8 05:19:46.175161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:19:46.189365 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 05:19:46.205322 systemd[1]: Started sshd@0-172.24.4.234:22-172.24.4.1:58608.service - OpenSSH per-connection server daemon (172.24.4.1:58608). May 8 05:19:46.265049 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 05:19:47.144693 sshd[1535]: Accepted publickey for core from 172.24.4.1 port 58608 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40 May 8 05:19:47.147873 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:19:47.175170 systemd-logind[1443]: New session 1 of user core. May 8 05:19:47.178183 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
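Note on the CNI error above: the CRI plugin came up with NetworkPluginConfDir /etc/cni/net.d and NetworkPluginBinDir /opt/cni/bin (see the config dump), and it will keep reporting "no network config found" until something installs a conflist there, normally a CNI addon applied after cluster bootstrap. A minimal sketch of the kind of file it expects; the file name, network name, bridge, and subnet below are illustrative, not taken from this host:

    # Illustrative conflist for /etc/cni/net.d (values are hypothetical)
    cat <<'EOF' >/etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": "10.88.0.0/16"}]],
            "routes": [{"dst": "0.0.0.0/0"}]
          }
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
    EOF
    # The "cni network conf syncer" started above picks this up without a restart.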
May 8 05:19:47.193734 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 05:19:47.221721 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 05:19:47.235965 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 05:19:47.240006 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 05:19:47.354901 systemd[1548]: Queued start job for default target default.target. May 8 05:19:47.365537 systemd[1548]: Created slice app.slice - User Application Slice. May 8 05:19:47.365866 systemd[1548]: Reached target paths.target - Paths. May 8 05:19:47.365881 systemd[1548]: Reached target timers.target - Timers. May 8 05:19:47.368748 systemd[1548]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 05:19:47.377738 systemd[1548]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 05:19:47.378360 systemd[1548]: Reached target sockets.target - Sockets. May 8 05:19:47.378377 systemd[1548]: Reached target basic.target - Basic System. May 8 05:19:47.378412 systemd[1548]: Reached target default.target - Main User Target. May 8 05:19:47.378437 systemd[1548]: Startup finished in 132ms. May 8 05:19:47.379366 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 05:19:47.391092 systemd[1]: Started session-1.scope - Session 1 of User core. May 8 05:19:47.808774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:19:47.817378 (kubelet)[1563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 05:19:47.885051 systemd[1]: Started sshd@1-172.24.4.234:22-172.24.4.1:58612.service - OpenSSH per-connection server daemon (172.24.4.1:58612). May 8 05:19:49.175568 kubelet[1563]: E0508 05:19:49.175394 1563 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 05:19:49.180240 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 05:19:49.180579 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 05:19:49.181187 systemd[1]: kubelet.service: Consumed 1.901s CPU time. May 8 05:19:49.460410 sshd[1565]: Accepted publickey for core from 172.24.4.1 port 58612 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40 May 8 05:19:49.463561 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:19:49.473231 systemd-logind[1443]: New session 2 of user core. May 8 05:19:49.485102 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 05:19:50.103121 sshd[1565]: pam_unix(sshd:session): session closed for user core May 8 05:19:50.115673 systemd[1]: sshd@1-172.24.4.234:22-172.24.4.1:58612.service: Deactivated successfully. May 8 05:19:50.119016 systemd[1]: session-2.scope: Deactivated successfully. May 8 05:19:50.122356 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. May 8 05:19:50.128349 systemd[1]: Started sshd@2-172.24.4.234:22-172.24.4.1:58620.service - OpenSSH per-connection server daemon (172.24.4.1:58620). May 8 05:19:50.137932 systemd-logind[1443]: Removed session 2. 
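The kubelet failure above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm during init, and until it exists the unit exits with status 1. Purely to illustrate what the error is looking for, a hypothetical minimal KubeletConfiguration (kubeadm generates a much fuller one; cgroupDriver systemd matches the SystemdCgroup=true runc option in the containerd config dump):

    # Hypothetical minimal file; kubeadm normally writes this during init
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # matches containerd's runc SystemdCgroup=true
    EOF
    systemctl restart kubelet   # otherwise systemd retries on its own schedule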
May 8 05:19:50.802929 login[1525]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 8 05:19:50.813729 systemd-logind[1443]: New session 3 of user core. May 8 05:19:50.820007 login[1526]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 8 05:19:50.826300 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 05:19:50.841206 systemd-logind[1443]: New session 4 of user core. May 8 05:19:50.849253 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 05:19:51.274088 sshd[1582]: Accepted publickey for core from 172.24.4.1 port 58620 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40 May 8 05:19:51.277143 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:19:51.287433 systemd-logind[1443]: New session 5 of user core. May 8 05:19:51.297311 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 05:19:52.011850 coreos-metadata[1433]: May 08 05:19:52.011 WARN failed to locate config-drive, using the metadata service API instead May 8 05:19:52.060262 coreos-metadata[1433]: May 08 05:19:52.060 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 8 05:19:52.071106 sshd[1582]: pam_unix(sshd:session): session closed for user core May 8 05:19:52.077157 systemd[1]: sshd@2-172.24.4.234:22-172.24.4.1:58620.service: Deactivated successfully. May 8 05:19:52.081161 systemd[1]: session-5.scope: Deactivated successfully. May 8 05:19:52.084544 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. May 8 05:19:52.087676 systemd-logind[1443]: Removed session 5. May 8 05:19:52.310802 coreos-metadata[1433]: May 08 05:19:52.310 INFO Fetch successful May 8 05:19:52.310802 coreos-metadata[1433]: May 08 05:19:52.310 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 8 05:19:52.322064 coreos-metadata[1433]: May 08 05:19:52.321 INFO Fetch successful May 8 05:19:52.322064 coreos-metadata[1433]: May 08 05:19:52.321 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 8 05:19:52.337942 coreos-metadata[1433]: May 08 05:19:52.337 INFO Fetch successful May 8 05:19:52.337942 coreos-metadata[1433]: May 08 05:19:52.337 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 8 05:19:52.352106 coreos-metadata[1433]: May 08 05:19:52.352 INFO Fetch successful May 8 05:19:52.352106 coreos-metadata[1433]: May 08 05:19:52.352 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 8 05:19:52.368086 coreos-metadata[1433]: May 08 05:19:52.367 INFO Fetch successful May 8 05:19:52.368086 coreos-metadata[1433]: May 08 05:19:52.368 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 8 05:19:52.381907 coreos-metadata[1433]: May 08 05:19:52.381 INFO Fetch successful May 8 05:19:52.427333 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 8 05:19:52.430212 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
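The coreos-metadata fetch sequence above can be reproduced by hand; these are the exact endpoints from the log, which the agent falls back to over the link-local metadata service because no config-drive was found:

    # Same endpoints the agent walked, fetched manually
    curl -sf http://169.254.169.254/openstack/2012-08-10/meta_data.json | head -c 200; echo
    for k in hostname instance-id instance-type local-ipv4 public-ipv4; do
      printf '%-14s ' "$k:"; curl -sf "http://169.254.169.254/latest/meta-data/$k"; echo
    done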
May 8 05:19:52.504030 coreos-metadata[1500]: May 08 05:19:52.503 WARN failed to locate config-drive, using the metadata service API instead May 8 05:19:52.543362 coreos-metadata[1500]: May 08 05:19:52.543 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 8 05:19:52.556770 coreos-metadata[1500]: May 08 05:19:52.556 INFO Fetch successful May 8 05:19:52.556770 coreos-metadata[1500]: May 08 05:19:52.556 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 8 05:19:52.569940 coreos-metadata[1500]: May 08 05:19:52.568 INFO Fetch successful May 8 05:19:52.575744 unknown[1500]: wrote ssh authorized keys file for user: core May 8 05:19:52.617310 update-ssh-keys[1621]: Updated "/home/core/.ssh/authorized_keys" May 8 05:19:52.618323 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 8 05:19:52.622320 systemd[1]: Finished sshkeys.service. May 8 05:19:52.627140 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 05:19:52.628014 systemd[1]: Startup finished in 1.262s (kernel) + 16.010s (initrd) + 10.862s (userspace) = 28.135s. May 8 05:19:59.209204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 05:19:59.218997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:19:59.520297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:19:59.528865 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 05:19:59.673126 kubelet[1632]: E0508 05:19:59.673033 1632 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 05:19:59.681339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 05:19:59.681875 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 05:20:02.094080 systemd[1]: Started sshd@3-172.24.4.234:22-172.24.4.1:41734.service - OpenSSH per-connection server daemon (172.24.4.1:41734). May 8 05:20:03.580563 sshd[1643]: Accepted publickey for core from 172.24.4.1 port 41734 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40 May 8 05:20:03.583273 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:20:03.596277 systemd-logind[1443]: New session 6 of user core. May 8 05:20:03.606027 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 05:20:04.336661 sshd[1643]: pam_unix(sshd:session): session closed for user core May 8 05:20:04.348043 systemd[1]: sshd@3-172.24.4.234:22-172.24.4.1:41734.service: Deactivated successfully. May 8 05:20:04.351805 systemd[1]: session-6.scope: Deactivated successfully. May 8 05:20:04.355938 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. May 8 05:20:04.363177 systemd[1]: Started sshd@4-172.24.4.234:22-172.24.4.1:44852.service - OpenSSH per-connection server daemon (172.24.4.1:44852). May 8 05:20:04.366372 systemd-logind[1443]: Removed session 6. 
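The sshkeys variant of the agent does the same walk for public-keys and then rewrites the core user's authorized_keys. Roughly equivalent manual steps, with the installation mechanics assumed (the real agent hands the key to update-ssh-keys rather than appending directly):

    # Approximation of the agent's effect; exact mechanism assumed
    key=$(curl -sf http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key)
    install -d -m 0700 -o core -g core /home/core/.ssh
    printf '%s\n' "$key" >>/home/core/.ssh/authorized_keys
    chown core:core /home/core/.ssh/authorized_keys
    chmod 0600 /home/core/.ssh/authorized_keys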
May 8 05:20:06.007097 sshd[1650]: Accepted publickey for core from 172.24.4.1 port 44852 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40 May 8 05:20:06.009786 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:20:06.020736 systemd-logind[1443]: New session 7 of user core. May 8 05:20:06.023959 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 05:20:06.793123 sshd[1650]: pam_unix(sshd:session): session closed for user core May 8 05:20:06.804974 systemd[1]: sshd@4-172.24.4.234:22-172.24.4.1:44852.service: Deactivated successfully. May 8 05:20:06.808443 systemd[1]: session-7.scope: Deactivated successfully. May 8 05:20:06.812790 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. May 8 05:20:06.818759 systemd[1]: Started sshd@5-172.24.4.234:22-172.24.4.1:44856.service - OpenSSH per-connection server daemon (172.24.4.1:44856). May 8 05:20:06.821476 systemd-logind[1443]: Removed session 7. May 8 05:20:07.965691 sshd[1657]: Accepted publickey for core from 172.24.4.1 port 44856 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40 May 8 05:20:07.969314 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:20:07.980085 systemd-logind[1443]: New session 8 of user core. May 8 05:20:07.986952 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 05:20:08.601466 sshd[1657]: pam_unix(sshd:session): session closed for user core May 8 05:20:08.613159 systemd[1]: sshd@5-172.24.4.234:22-172.24.4.1:44856.service: Deactivated successfully. May 8 05:20:08.616234 systemd[1]: session-8.scope: Deactivated successfully. May 8 05:20:08.620023 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. May 8 05:20:08.625211 systemd[1]: Started sshd@6-172.24.4.234:22-172.24.4.1:44866.service - OpenSSH per-connection server daemon (172.24.4.1:44866). May 8 05:20:08.628298 systemd-logind[1443]: Removed session 8. May 8 05:20:09.709089 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 05:20:09.717039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:20:09.790305 sshd[1664]: Accepted publickey for core from 172.24.4.1 port 44866 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40 May 8 05:20:09.793029 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:20:09.802596 systemd-logind[1443]: New session 9 of user core. May 8 05:20:09.814933 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 05:20:10.061092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:20:10.072255 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 05:20:10.158646 kubelet[1675]: E0508 05:20:10.158510 1675 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 05:20:10.163280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 05:20:10.163683 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
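The kubelet restart counter entries are spaced roughly ten seconds apart (05:19:49, 05:19:59, 05:20:09, ...), consistent with a Restart=always / RestartSec=10 unit, which is what kubeadm's standard kubelet drop-in configures. A hypothetical drop-in reproducing that policy, shown only as an assumption about how this unit is set up:

    # Assumed restart policy; the actual drop-in on this host is not shown in the log
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/99-restart.conf
    [Service]
    Restart=always
    RestartSec=10
    EOF
    systemd daemon-reload 2>/dev/null || systemctl daemon-reload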
May 8 05:20:10.179131 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 05:20:10.179441 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 05:20:10.195294 sudo[1682]: pam_unix(sudo:session): session closed for user root May 8 05:20:10.393598 sshd[1664]: pam_unix(sshd:session): session closed for user core May 8 05:20:10.405472 systemd[1]: sshd@6-172.24.4.234:22-172.24.4.1:44866.service: Deactivated successfully. May 8 05:20:10.408933 systemd[1]: session-9.scope: Deactivated successfully. May 8 05:20:10.410785 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. May 8 05:20:10.421182 systemd[1]: Started sshd@7-172.24.4.234:22-172.24.4.1:44874.service - OpenSSH per-connection server daemon (172.24.4.1:44874). May 8 05:20:10.423402 systemd-logind[1443]: Removed session 9. May 8 05:20:11.531715 sshd[1688]: Accepted publickey for core from 172.24.4.1 port 44874 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40 May 8 05:20:11.534537 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:20:11.543776 systemd-logind[1443]: New session 10 of user core. May 8 05:20:11.552931 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 05:20:12.008552 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 05:20:12.009247 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 05:20:12.016872 sudo[1692]: pam_unix(sudo:session): session closed for user root May 8 05:20:12.028933 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 8 05:20:12.030219 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 05:20:12.058221 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 8 05:20:12.070802 auditctl[1695]: No rules May 8 05:20:12.071785 systemd[1]: audit-rules.service: Deactivated successfully. May 8 05:20:12.072195 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 8 05:20:12.081450 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 8 05:20:12.140576 augenrules[1713]: No rules May 8 05:20:12.143531 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 8 05:20:12.145900 sudo[1691]: pam_unix(sudo:session): session closed for user root May 8 05:20:12.327438 sshd[1688]: pam_unix(sshd:session): session closed for user core May 8 05:20:12.340146 systemd[1]: sshd@7-172.24.4.234:22-172.24.4.1:44874.service: Deactivated successfully. May 8 05:20:12.343073 systemd[1]: session-10.scope: Deactivated successfully. May 8 05:20:12.345904 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. May 8 05:20:12.354670 systemd[1]: Started sshd@8-172.24.4.234:22-172.24.4.1:44884.service - OpenSSH per-connection server daemon (172.24.4.1:44884). May 8 05:20:12.357416 systemd-logind[1443]: Removed session 10. May 8 05:20:13.467374 sshd[1721]: Accepted publickey for core from 172.24.4.1 port 44884 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40 May 8 05:20:13.470290 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 05:20:13.481430 systemd-logind[1443]: New session 11 of user core. May 8 05:20:13.492012 systemd[1]: Started session-11.scope - Session 11 of User core. 
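The audit-rules restart above went through auditctl and augenrules, both of which reported an empty rule set after the sudo session deleted the rule fragments. The same state can be inspected and reloaded directly:

    auditctl -l                    # list loaded kernel audit rules ("No rules" here)
    ls /etc/audit/rules.d/         # fragments that augenrules merges
    augenrules --load              # rebuild /etc/audit/audit.rules and load it
    systemctl restart audit-rules  # the same unit the sudo session restarted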
May 8 05:20:13.943963 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 05:20:13.944593 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 05:20:14.578998 (dockerd)[1739]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 05:20:14.580059 systemd[1]: Starting docker.service - Docker Application Container Engine... May 8 05:20:15.205823 dockerd[1739]: time="2025-05-08T05:20:15.205750042Z" level=info msg="Starting up" May 8 05:20:15.390777 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3095772247-merged.mount: Deactivated successfully. May 8 05:20:15.455190 dockerd[1739]: time="2025-05-08T05:20:15.455122019Z" level=info msg="Loading containers: start." May 8 05:20:15.609686 kernel: Initializing XFRM netlink socket May 8 05:20:15.640847 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection. May 8 05:20:15.704409 systemd-networkd[1382]: docker0: Link UP May 8 05:20:15.723990 dockerd[1739]: time="2025-05-08T05:20:15.723935548Z" level=info msg="Loading containers: done." May 8 05:20:15.749129 dockerd[1739]: time="2025-05-08T05:20:15.749060170Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 05:20:15.749399 dockerd[1739]: time="2025-05-08T05:20:15.749171859Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 8 05:20:15.749399 dockerd[1739]: time="2025-05-08T05:20:15.749276806Z" level=info msg="Daemon has completed initialization" May 8 05:20:16.488572 systemd-timesyncd[1386]: Contacted time server 72.46.61.205:123 (2.flatcar.pool.ntp.org). May 8 05:20:16.488627 systemd-timesyncd[1386]: Initial clock synchronization to Thu 2025-05-08 05:20:16.488351 UTC. May 8 05:20:16.488672 systemd-resolved[1385]: Clock change detected. Flushing caches. May 8 05:20:16.518211 dockerd[1739]: time="2025-05-08T05:20:16.518152872Z" level=info msg="API listen on /run/docker.sock" May 8 05:20:16.519319 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 05:20:18.209526 containerd[1462]: time="2025-05-08T05:20:18.209250395Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 8 05:20:18.943377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771867342.mount: Deactivated successfully. May 8 05:20:20.933636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 8 05:20:20.940284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:20:21.107516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
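The overlay2 warning during docker startup means native diff was disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; the daemon still runs normally and only image builds take the slower diff path. A quick way to confirm the driver state (the /proc/config.gz check only works if the kernel exposes its config):

    docker info --format '{{.Driver}}'         # expect: overlay2
    docker info | grep -A 3 'Storage Driver'   # shows "Native Overlay Diff: false"
    zgrep CONFIG_OVERLAY_FS_REDIRECT_DIR /proc/config.gz 2>/dev/null || true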
May 8 05:20:21.118248 (kubelet)[1944]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 05:20:21.172726 kubelet[1944]: E0508 05:20:21.172637 1944 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 05:20:21.176678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 05:20:21.177032 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 05:20:21.303284 containerd[1462]: time="2025-05-08T05:20:21.302379017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:21.305873 containerd[1462]: time="2025-05-08T05:20:21.305760699Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674881" May 8 05:20:21.307913 containerd[1462]: time="2025-05-08T05:20:21.307800705Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:21.315353 containerd[1462]: time="2025-05-08T05:20:21.315190735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:21.318683 containerd[1462]: time="2025-05-08T05:20:21.318325304Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 3.109021629s" May 8 05:20:21.318683 containerd[1462]: time="2025-05-08T05:20:21.318407579Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 8 05:20:21.370743 containerd[1462]: time="2025-05-08T05:20:21.370331001Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 8 05:20:23.572033 containerd[1462]: time="2025-05-08T05:20:23.571922856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:23.574003 containerd[1462]: time="2025-05-08T05:20:23.573489184Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617542" May 8 05:20:23.575525 containerd[1462]: time="2025-05-08T05:20:23.575473084Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:23.578812 containerd[1462]: time="2025-05-08T05:20:23.578764638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:23.580090 
containerd[1462]: time="2025-05-08T05:20:23.579889968Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.209494386s" May 8 05:20:23.580090 containerd[1462]: time="2025-05-08T05:20:23.579954820Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 8 05:20:23.603992 containerd[1462]: time="2025-05-08T05:20:23.603841700Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 8 05:20:25.565064 containerd[1462]: time="2025-05-08T05:20:25.565005041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:25.567035 containerd[1462]: time="2025-05-08T05:20:25.566983041Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903690" May 8 05:20:25.568691 containerd[1462]: time="2025-05-08T05:20:25.568646080Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:25.572007 containerd[1462]: time="2025-05-08T05:20:25.571933385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:25.573158 containerd[1462]: time="2025-05-08T05:20:25.573038919Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.969160941s" May 8 05:20:25.573158 containerd[1462]: time="2025-05-08T05:20:25.573072942Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 8 05:20:25.595500 containerd[1462]: time="2025-05-08T05:20:25.595407992Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 05:20:26.969062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4047467534.mount: Deactivated successfully. 
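The PullImage lines here and below go through containerd's CRI plugin; the log does not show which client issued them (plausibly the install script pre-pulling control-plane images). Assuming the ContainerdEndpoint from the config dump, the same pulls can be reproduced with crictl:

    # Endpoint taken from the CRI config dump above; image tag from the pull log
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-scheduler:v1.30.12
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images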
May 8 05:20:27.459058 containerd[1462]: time="2025-05-08T05:20:27.459002142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:27.460300 containerd[1462]: time="2025-05-08T05:20:27.460251616Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185825" May 8 05:20:27.461716 containerd[1462]: time="2025-05-08T05:20:27.461674634Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:27.464350 containerd[1462]: time="2025-05-08T05:20:27.464312091Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:27.465101 containerd[1462]: time="2025-05-08T05:20:27.464959545Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.869477906s" May 8 05:20:27.465101 containerd[1462]: time="2025-05-08T05:20:27.465006433Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 8 05:20:27.488561 containerd[1462]: time="2025-05-08T05:20:27.488510555Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 05:20:28.165523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2991278957.mount: Deactivated successfully. 
May 8 05:20:30.090872 containerd[1462]: time="2025-05-08T05:20:30.090820166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:30.093014 containerd[1462]: time="2025-05-08T05:20:30.092958877Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" May 8 05:20:30.094628 containerd[1462]: time="2025-05-08T05:20:30.094586420Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:30.097831 containerd[1462]: time="2025-05-08T05:20:30.097779749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:30.099257 containerd[1462]: time="2025-05-08T05:20:30.098857049Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.610309745s" May 8 05:20:30.099257 containerd[1462]: time="2025-05-08T05:20:30.098890733Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 8 05:20:30.120831 containerd[1462]: time="2025-05-08T05:20:30.120800033Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 8 05:20:30.696615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3652273915.mount: Deactivated successfully. 
May 8 05:20:30.709299 containerd[1462]: time="2025-05-08T05:20:30.709180952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:30.711792 containerd[1462]: time="2025-05-08T05:20:30.711159022Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" May 8 05:20:30.713523 containerd[1462]: time="2025-05-08T05:20:30.713316969Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:30.721300 containerd[1462]: time="2025-05-08T05:20:30.720475154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:30.725584 containerd[1462]: time="2025-05-08T05:20:30.724643582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 603.618667ms" May 8 05:20:30.725584 containerd[1462]: time="2025-05-08T05:20:30.724719845Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 8 05:20:30.775000 containerd[1462]: time="2025-05-08T05:20:30.774167634Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 8 05:20:31.183552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 8 05:20:31.192252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:20:31.222033 update_engine[1446]: I20250508 05:20:31.221112 1446 update_attempter.cc:509] Updating boot flags... May 8 05:20:31.515058 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2053) May 8 05:20:31.598556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:20:31.602951 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 05:20:31.629503 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2057) May 8 05:20:31.667799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342352255.mount: Deactivated successfully. May 8 05:20:31.709419 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2057) May 8 05:20:31.719061 kubelet[2065]: E0508 05:20:31.718225 2065 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 05:20:31.725404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 05:20:31.725721 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
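The update_attempter "Updating boot flags" entry is Flatcar's update_engine marking the current A/B boot slot. On Flatcar this state is normally inspected with the commands below; they are assumed from Flatcar's standard tooling, not shown in this log:

    update_engine_client -status   # current update / boot-slot state
    cgpt show /dev/vda             # GPT priority and successful flags per slot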
May 8 05:20:34.937262 containerd[1462]: time="2025-05-08T05:20:34.937140238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:34.939083 containerd[1462]: time="2025-05-08T05:20:34.938998974Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" May 8 05:20:34.940409 containerd[1462]: time="2025-05-08T05:20:34.940307559Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:34.944381 containerd[1462]: time="2025-05-08T05:20:34.944319633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:20:34.946952 containerd[1462]: time="2025-05-08T05:20:34.946706279Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.172470267s" May 8 05:20:34.946952 containerd[1462]: time="2025-05-08T05:20:34.946764759Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 8 05:20:39.454143 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:20:39.462507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:20:39.504530 systemd[1]: Reloading requested from client PID 2183 ('systemctl') (unit session-11.scope)... May 8 05:20:39.504550 systemd[1]: Reloading... May 8 05:20:39.591110 zram_generator::config[2222]: No configuration found. May 8 05:20:39.749595 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 05:20:39.837047 systemd[1]: Reloading finished in 332 ms. May 8 05:20:39.888416 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:20:39.891284 systemd[1]: kubelet.service: Deactivated successfully. May 8 05:20:39.891518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:20:39.897198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:20:40.313311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:20:40.336547 (kubelet)[2291]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 05:20:40.411056 kubelet[2291]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 05:20:40.411056 kubelet[2291]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
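The docker.socket warning asks for the legacy /var/run path to be updated in the unit itself; systemd already rewrote it at runtime. A drop-in sketch that would make the fix permanent (the empty ListenStream= line clears the inherited socket list before re-adding the new path):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-runpath.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload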
May 8 05:20:40.411056 kubelet[2291]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 05:20:40.411056 kubelet[2291]: I0508 05:20:40.411057 2291 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 05:20:41.722258 kubelet[2291]: I0508 05:20:41.722185 2291 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 05:20:41.722258 kubelet[2291]: I0508 05:20:41.722216 2291 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 05:20:41.723083 kubelet[2291]: I0508 05:20:41.722422 2291 server.go:927] "Client rotation is on, will bootstrap in background" May 8 05:20:41.741637 kubelet[2291]: I0508 05:20:41.741522 2291 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 05:20:41.744622 kubelet[2291]: E0508 05:20:41.744504 2291 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.234:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:41.753138 kubelet[2291]: I0508 05:20:41.753111 2291 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 05:20:41.753332 kubelet[2291]: I0508 05:20:41.753280 2291 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 05:20:41.753502 kubelet[2291]: I0508 05:20:41.753302 2291 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-e0f469a76e.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 05:20:41.753502 kubelet[2291]: I0508 05:20:41.753488 2291 topology_manager.go:138] "Creating 
topology manager with none policy" May 8 05:20:41.753502 kubelet[2291]: I0508 05:20:41.753500 2291 container_manager_linux.go:301] "Creating device plugin manager" May 8 05:20:41.753901 kubelet[2291]: I0508 05:20:41.753604 2291 state_mem.go:36] "Initialized new in-memory state store" May 8 05:20:41.754774 kubelet[2291]: I0508 05:20:41.754725 2291 kubelet.go:400] "Attempting to sync node with API server" May 8 05:20:41.754774 kubelet[2291]: I0508 05:20:41.754746 2291 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 05:20:41.754774 kubelet[2291]: I0508 05:20:41.754765 2291 kubelet.go:312] "Adding apiserver pod source" May 8 05:20:41.754774 kubelet[2291]: I0508 05:20:41.754779 2291 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 05:20:41.760892 kubelet[2291]: W0508 05:20:41.760067 2291 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.234:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:41.760892 kubelet[2291]: E0508 05:20:41.760123 2291 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.234:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:41.760892 kubelet[2291]: W0508 05:20:41.760199 2291 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-e0f469a76e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:41.760892 kubelet[2291]: E0508 05:20:41.760231 2291 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-e0f469a76e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:41.760892 kubelet[2291]: I0508 05:20:41.760659 2291 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 05:20:41.763351 kubelet[2291]: I0508 05:20:41.762382 2291 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 05:20:41.763351 kubelet[2291]: W0508 05:20:41.762432 2291 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
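All of the "connection refused" reflector errors point at https://172.24.4.234:6443, the kube-apiserver this same kubelet is expected to launch from the static pod path it just registered. Until that pod is up, a probe from the host fails the same way:

    curl -k https://172.24.4.234:6443/healthz   # connection refused until apiserver starts
    ls /etc/kubernetes/manifests                # static pod manifests the kubelet watches
    journalctl -u kubelet --no-pager | grep -c 'connection refused'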
May 8 05:20:41.763351 kubelet[2291]: I0508 05:20:41.763222 2291 server.go:1264] "Started kubelet" May 8 05:20:41.777152 kubelet[2291]: I0508 05:20:41.777131 2291 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 05:20:41.784028 kubelet[2291]: I0508 05:20:41.783284 2291 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 05:20:41.785558 kubelet[2291]: I0508 05:20:41.785542 2291 server.go:455] "Adding debug handlers to kubelet server" May 8 05:20:41.788536 kubelet[2291]: I0508 05:20:41.788492 2291 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 05:20:41.788783 kubelet[2291]: I0508 05:20:41.788770 2291 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 05:20:41.791142 kubelet[2291]: E0508 05:20:41.789238 2291 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.234:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.234:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-e0f469a76e.novalocal.183d75b2e8fde5fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-e0f469a76e.novalocal,UID:ci-4081-3-3-n-e0f469a76e.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-e0f469a76e.novalocal,},FirstTimestamp:2025-05-08 05:20:41.763202556 +0000 UTC m=+1.419156077,LastTimestamp:2025-05-08 05:20:41.763202556 +0000 UTC m=+1.419156077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-e0f469a76e.novalocal,}" May 8 05:20:41.791232 kubelet[2291]: I0508 05:20:41.791218 2291 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 05:20:41.794155 kubelet[2291]: E0508 05:20:41.794107 2291 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-e0f469a76e.novalocal\" not found" May 8 05:20:41.795517 kubelet[2291]: E0508 05:20:41.795490 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-e0f469a76e.novalocal?timeout=10s\": dial tcp 172.24.4.234:6443: connect: connection refused" interval="200ms" May 8 05:20:41.796775 kubelet[2291]: I0508 05:20:41.796762 2291 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 05:20:41.797608 kubelet[2291]: W0508 05:20:41.797519 2291 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:41.797728 kubelet[2291]: E0508 05:20:41.797715 2291 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:41.797992 kubelet[2291]: I0508 05:20:41.797957 2291 factory.go:221] Registration of the containerd container factory successfully May 8 05:20:41.798059 kubelet[2291]: I0508 05:20:41.798050 2291 factory.go:221] Registration of the systemd container factory 
successfully May 8 05:20:41.798181 kubelet[2291]: I0508 05:20:41.798165 2291 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 05:20:41.799782 kubelet[2291]: I0508 05:20:41.799744 2291 reconciler.go:26] "Reconciler: start to sync state" May 8 05:20:41.813082 kubelet[2291]: E0508 05:20:41.813057 2291 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 05:20:41.818909 kubelet[2291]: I0508 05:20:41.818826 2291 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 05:20:41.820507 kubelet[2291]: I0508 05:20:41.820374 2291 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 05:20:41.820507 kubelet[2291]: I0508 05:20:41.820399 2291 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 05:20:41.820507 kubelet[2291]: I0508 05:20:41.820443 2291 kubelet.go:2337] "Starting kubelet main sync loop" May 8 05:20:41.820808 kubelet[2291]: E0508 05:20:41.820481 2291 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 05:20:41.822296 kubelet[2291]: W0508 05:20:41.822275 2291 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:41.823036 kubelet[2291]: E0508 05:20:41.823021 2291 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:41.828825 kubelet[2291]: I0508 05:20:41.828752 2291 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 05:20:41.829082 kubelet[2291]: I0508 05:20:41.828918 2291 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 05:20:41.829082 kubelet[2291]: I0508 05:20:41.828955 2291 state_mem.go:36] "Initialized new in-memory state store" May 8 05:20:41.835922 kubelet[2291]: I0508 05:20:41.835857 2291 policy_none.go:49] "None policy: Start" May 8 05:20:41.836719 kubelet[2291]: I0508 05:20:41.836664 2291 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 05:20:41.836814 kubelet[2291]: I0508 05:20:41.836693 2291 state_mem.go:35] "Initializing new in-memory state store" May 8 05:20:41.845246 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 8 05:20:41.859437 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 8 05:20:41.865353 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
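
[Editor's note] The kubelet above starts serving the podresources API on unix:/var/lib/kubelet/pod-resources/kubelet.sock. A minimal Go sketch of a client for that endpoint, assuming the k8s.io/kubelet podresources v1 API and local access to the socket; this is an illustration, not a tool used on this node:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Dial the socket the kubelet logged as its podresources endpoint.
        conn, err := grpc.DialContext(ctx, "unix:///var/lib/kubelet/pod-resources/kubelet.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := podresourcesv1.NewPodResourcesListerClient(conn)
        resp, err := client.List(ctx, &podresourcesv1.ListPodResourcesRequest{})
        if err != nil {
            panic(err)
        }
        for _, p := range resp.GetPodResources() {
            fmt.Printf("%s/%s\n", p.GetNamespace(), p.GetName())
        }
    }
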
May 8 05:20:41.873051 kubelet[2291]: I0508 05:20:41.873011 2291 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 05:20:41.873234 kubelet[2291]: I0508 05:20:41.873181 2291 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 05:20:41.873308 kubelet[2291]: I0508 05:20:41.873296 2291 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 05:20:41.877129 kubelet[2291]: E0508 05:20:41.876437 2291 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-e0f469a76e.novalocal\" not found" May 8 05:20:41.897900 kubelet[2291]: I0508 05:20:41.897791 2291 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:41.898253 kubelet[2291]: E0508 05:20:41.898173 2291 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.234:6443/api/v1/nodes\": dial tcp 172.24.4.234:6443: connect: connection refused" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:41.921578 kubelet[2291]: I0508 05:20:41.921491 2291 topology_manager.go:215] "Topology Admit Handler" podUID="31dfbb04a9cf6a975fcdaf8f27e43913" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:41.923335 kubelet[2291]: I0508 05:20:41.923284 2291 topology_manager.go:215] "Topology Admit Handler" podUID="7e80f897a50a43de0c78a4f854795151" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:41.925722 kubelet[2291]: I0508 05:20:41.925543 2291 topology_manager.go:215] "Topology Admit Handler" podUID="a7f2e4f430db92be75946d9e8c611baf" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:41.939756 systemd[1]: Created slice kubepods-burstable-pod31dfbb04a9cf6a975fcdaf8f27e43913.slice - libcontainer container kubepods-burstable-pod31dfbb04a9cf6a975fcdaf8f27e43913.slice. May 8 05:20:41.961009 systemd[1]: Created slice kubepods-burstable-pod7e80f897a50a43de0c78a4f854795151.slice - libcontainer container kubepods-burstable-pod7e80f897a50a43de0c78a4f854795151.slice. May 8 05:20:41.977295 systemd[1]: Created slice kubepods-burstable-poda7f2e4f430db92be75946d9e8c611baf.slice - libcontainer container kubepods-burstable-poda7f2e4f430db92be75946d9e8c611baf.slice. 
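
[Editor's note] The three Topology Admit Handler entries above are the control-plane static pods read from /etc/kubernetes/manifests; the VerifyControllerAttachedVolume entries that follow attach their hostPath volumes. A trimmed, hypothetical sketch of how such a manifest declares them, using conventional kubeadm-style paths rather than anything read from this node:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.30.1
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
        - name: ca-certs
          mountPath: /etc/ssl/certs
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
      - name: ca-certs
        hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate
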
May 8 05:20:41.996530 kubelet[2291]: E0508 05:20:41.996484 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-e0f469a76e.novalocal?timeout=10s\": dial tcp 172.24.4.234:6443: connect: connection refused" interval="400ms" May 8 05:20:42.000937 kubelet[2291]: I0508 05:20:42.000880 2291 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/31dfbb04a9cf6a975fcdaf8f27e43913-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"31dfbb04a9cf6a975fcdaf8f27e43913\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:42.001227 kubelet[2291]: I0508 05:20:42.001190 2291 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e80f897a50a43de0c78a4f854795151-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"7e80f897a50a43de0c78a4f854795151\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:42.001422 kubelet[2291]: I0508 05:20:42.001386 2291 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/31dfbb04a9cf6a975fcdaf8f27e43913-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"31dfbb04a9cf6a975fcdaf8f27e43913\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:42.001608 kubelet[2291]: I0508 05:20:42.001575 2291 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/31dfbb04a9cf6a975fcdaf8f27e43913-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"31dfbb04a9cf6a975fcdaf8f27e43913\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:42.001910 kubelet[2291]: I0508 05:20:42.001871 2291 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e80f897a50a43de0c78a4f854795151-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"7e80f897a50a43de0c78a4f854795151\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:42.002352 kubelet[2291]: I0508 05:20:42.002113 2291 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e80f897a50a43de0c78a4f854795151-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"7e80f897a50a43de0c78a4f854795151\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:42.002352 kubelet[2291]: I0508 05:20:42.002177 2291 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e80f897a50a43de0c78a4f854795151-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"7e80f897a50a43de0c78a4f854795151\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:42.002352 kubelet[2291]: I0508 05:20:42.002233 2291 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e80f897a50a43de0c78a4f854795151-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"7e80f897a50a43de0c78a4f854795151\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:42.002352 kubelet[2291]: I0508 05:20:42.002283 2291 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7f2e4f430db92be75946d9e8c611baf-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"a7f2e4f430db92be75946d9e8c611baf\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:42.102605 kubelet[2291]: I0508 05:20:42.102392 2291 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:42.104227 kubelet[2291]: E0508 05:20:42.104155 2291 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.234:6443/api/v1/nodes\": dial tcp 172.24.4.234:6443: connect: connection refused" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:42.258635 containerd[1462]: time="2025-05-08T05:20:42.258388067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal,Uid:31dfbb04a9cf6a975fcdaf8f27e43913,Namespace:kube-system,Attempt:0,}" May 8 05:20:42.265959 containerd[1462]: time="2025-05-08T05:20:42.265853698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal,Uid:7e80f897a50a43de0c78a4f854795151,Namespace:kube-system,Attempt:0,}" May 8 05:20:42.289143 containerd[1462]: time="2025-05-08T05:20:42.288549434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-e0f469a76e.novalocal,Uid:a7f2e4f430db92be75946d9e8c611baf,Namespace:kube-system,Attempt:0,}" May 8 05:20:42.398021 kubelet[2291]: E0508 05:20:42.397867 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-e0f469a76e.novalocal?timeout=10s\": dial tcp 172.24.4.234:6443: connect: connection refused" interval="800ms" May 8 05:20:42.508571 kubelet[2291]: I0508 05:20:42.508479 2291 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:42.509288 kubelet[2291]: E0508 05:20:42.509100 2291 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.234:6443/api/v1/nodes\": dial tcp 172.24.4.234:6443: connect: connection refused" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:42.635940 kubelet[2291]: W0508 05:20:42.635837 2291 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:42.636281 kubelet[2291]: E0508 05:20:42.636030 2291 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.234:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 
05:20:42.640951 kubelet[2291]: E0508 05:20:42.640711 2291 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.234:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.234:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-e0f469a76e.novalocal.183d75b2e8fde5fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-e0f469a76e.novalocal,UID:ci-4081-3-3-n-e0f469a76e.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-e0f469a76e.novalocal,},FirstTimestamp:2025-05-08 05:20:41.763202556 +0000 UTC m=+1.419156077,LastTimestamp:2025-05-08 05:20:41.763202556 +0000 UTC m=+1.419156077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-e0f469a76e.novalocal,}" May 8 05:20:42.880803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3543714162.mount: Deactivated successfully. May 8 05:20:42.892324 containerd[1462]: time="2025-05-08T05:20:42.891918323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 05:20:42.895492 containerd[1462]: time="2025-05-08T05:20:42.895376278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 05:20:42.896911 containerd[1462]: time="2025-05-08T05:20:42.896822821Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 05:20:42.899567 containerd[1462]: time="2025-05-08T05:20:42.899488971Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 05:20:42.902346 containerd[1462]: time="2025-05-08T05:20:42.902108434Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" May 8 05:20:42.902346 containerd[1462]: time="2025-05-08T05:20:42.902293992Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 05:20:42.904555 containerd[1462]: time="2025-05-08T05:20:42.904466737Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 05:20:42.912282 containerd[1462]: time="2025-05-08T05:20:42.912201183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 05:20:42.917633 containerd[1462]: time="2025-05-08T05:20:42.916614661Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 627.925024ms" May 8 05:20:42.921669 containerd[1462]: 
time="2025-05-08T05:20:42.921125551Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 655.063111ms" May 8 05:20:42.931335 containerd[1462]: time="2025-05-08T05:20:42.930970315Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 671.572444ms" May 8 05:20:42.991173 kubelet[2291]: W0508 05:20:42.990900 2291 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.234:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:42.991173 kubelet[2291]: E0508 05:20:42.991167 2291 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.234:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:43.197637 containerd[1462]: time="2025-05-08T05:20:43.197214325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:20:43.198571 containerd[1462]: time="2025-05-08T05:20:43.198054922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:20:43.198872 containerd[1462]: time="2025-05-08T05:20:43.198489236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:20:43.199307 containerd[1462]: time="2025-05-08T05:20:43.199087188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:20:43.199424 kubelet[2291]: E0508 05:20:43.199230 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.234:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-e0f469a76e.novalocal?timeout=10s\": dial tcp 172.24.4.234:6443: connect: connection refused" interval="1.6s" May 8 05:20:43.200573 containerd[1462]: time="2025-05-08T05:20:43.199958372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:20:43.200573 containerd[1462]: time="2025-05-08T05:20:43.200150322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:20:43.200573 containerd[1462]: time="2025-05-08T05:20:43.200209392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:20:43.200573 containerd[1462]: time="2025-05-08T05:20:43.200408025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:20:43.209352 containerd[1462]: time="2025-05-08T05:20:43.209137898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:20:43.209840 containerd[1462]: time="2025-05-08T05:20:43.209729187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:20:43.210308 containerd[1462]: time="2025-05-08T05:20:43.210031724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:20:43.211105 containerd[1462]: time="2025-05-08T05:20:43.210771792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:20:43.236200 systemd[1]: Started cri-containerd-e2c0f97790ec4ad6a443abeb70fe4a0025846bcc0aace43e52a09b30c8520ec3.scope - libcontainer container e2c0f97790ec4ad6a443abeb70fe4a0025846bcc0aace43e52a09b30c8520ec3. May 8 05:20:43.239822 systemd[1]: Started cri-containerd-bfdd9a6d3da2bbc7c6dbc820bf456bbf5e2be8dec5c85211c1850bf21baf788b.scope - libcontainer container bfdd9a6d3da2bbc7c6dbc820bf456bbf5e2be8dec5c85211c1850bf21baf788b. May 8 05:20:43.257676 kubelet[2291]: W0508 05:20:43.257599 2291 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:43.257826 kubelet[2291]: E0508 05:20:43.257815 2291 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.234:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:43.260144 systemd[1]: Started cri-containerd-264fd790279eee82a4d27a2e4fc3aebc2420e24b238bbfff64562d9cda5df187.scope - libcontainer container 264fd790279eee82a4d27a2e4fc3aebc2420e24b238bbfff64562d9cda5df187. 
May 8 05:20:43.312134 kubelet[2291]: I0508 05:20:43.311764 2291 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:43.312475 kubelet[2291]: E0508 05:20:43.312454 2291 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.234:6443/api/v1/nodes\": dial tcp 172.24.4.234:6443: connect: connection refused" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:43.314575 containerd[1462]: time="2025-05-08T05:20:43.314421677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-e0f469a76e.novalocal,Uid:a7f2e4f430db92be75946d9e8c611baf,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2c0f97790ec4ad6a443abeb70fe4a0025846bcc0aace43e52a09b30c8520ec3\"" May 8 05:20:43.324995 containerd[1462]: time="2025-05-08T05:20:43.323955498Z" level=info msg="CreateContainer within sandbox \"e2c0f97790ec4ad6a443abeb70fe4a0025846bcc0aace43e52a09b30c8520ec3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 05:20:43.333882 containerd[1462]: time="2025-05-08T05:20:43.333839235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal,Uid:7e80f897a50a43de0c78a4f854795151,Namespace:kube-system,Attempt:0,} returns sandbox id \"264fd790279eee82a4d27a2e4fc3aebc2420e24b238bbfff64562d9cda5df187\"" May 8 05:20:43.338648 containerd[1462]: time="2025-05-08T05:20:43.338611806Z" level=info msg="CreateContainer within sandbox \"264fd790279eee82a4d27a2e4fc3aebc2420e24b238bbfff64562d9cda5df187\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 05:20:43.339954 containerd[1462]: time="2025-05-08T05:20:43.339916092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal,Uid:31dfbb04a9cf6a975fcdaf8f27e43913,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfdd9a6d3da2bbc7c6dbc820bf456bbf5e2be8dec5c85211c1850bf21baf788b\"" May 8 05:20:43.343064 containerd[1462]: time="2025-05-08T05:20:43.343032818Z" level=info msg="CreateContainer within sandbox \"bfdd9a6d3da2bbc7c6dbc820bf456bbf5e2be8dec5c85211c1850bf21baf788b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 05:20:43.353426 kubelet[2291]: W0508 05:20:43.353362 2291 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-e0f469a76e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:43.353426 kubelet[2291]: E0508 05:20:43.353435 2291 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.234:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-e0f469a76e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.234:6443: connect: connection refused May 8 05:20:43.368787 containerd[1462]: time="2025-05-08T05:20:43.368733139Z" level=info msg="CreateContainer within sandbox \"e2c0f97790ec4ad6a443abeb70fe4a0025846bcc0aace43e52a09b30c8520ec3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e58e72a5f35ca48347a1e0854de14b0ee39424cf9e808ef093287d75e8862a3d\"" May 8 05:20:43.369415 containerd[1462]: time="2025-05-08T05:20:43.369372468Z" level=info msg="StartContainer for \"e58e72a5f35ca48347a1e0854de14b0ee39424cf9e808ef093287d75e8862a3d\"" May 8 05:20:43.389616 
containerd[1462]: time="2025-05-08T05:20:43.389516628Z" level=info msg="CreateContainer within sandbox \"264fd790279eee82a4d27a2e4fc3aebc2420e24b238bbfff64562d9cda5df187\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"359355bc92d3e29e3dc17ee06d94f20606844346d178abd73da1f51307e30273\"" May 8 05:20:43.390693 containerd[1462]: time="2025-05-08T05:20:43.390602264Z" level=info msg="StartContainer for \"359355bc92d3e29e3dc17ee06d94f20606844346d178abd73da1f51307e30273\"" May 8 05:20:43.399860 containerd[1462]: time="2025-05-08T05:20:43.399820924Z" level=info msg="CreateContainer within sandbox \"bfdd9a6d3da2bbc7c6dbc820bf456bbf5e2be8dec5c85211c1850bf21baf788b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"da984101313778e971cab677fc836f55a562c74a31dcf185ec8270a5ec46c906\"" May 8 05:20:43.400172 systemd[1]: Started cri-containerd-e58e72a5f35ca48347a1e0854de14b0ee39424cf9e808ef093287d75e8862a3d.scope - libcontainer container e58e72a5f35ca48347a1e0854de14b0ee39424cf9e808ef093287d75e8862a3d. May 8 05:20:43.401506 containerd[1462]: time="2025-05-08T05:20:43.400810600Z" level=info msg="StartContainer for \"da984101313778e971cab677fc836f55a562c74a31dcf185ec8270a5ec46c906\"" May 8 05:20:43.449175 systemd[1]: Started cri-containerd-359355bc92d3e29e3dc17ee06d94f20606844346d178abd73da1f51307e30273.scope - libcontainer container 359355bc92d3e29e3dc17ee06d94f20606844346d178abd73da1f51307e30273. May 8 05:20:43.451157 systemd[1]: Started cri-containerd-da984101313778e971cab677fc836f55a562c74a31dcf185ec8270a5ec46c906.scope - libcontainer container da984101313778e971cab677fc836f55a562c74a31dcf185ec8270a5ec46c906. May 8 05:20:43.480299 containerd[1462]: time="2025-05-08T05:20:43.480253425Z" level=info msg="StartContainer for \"e58e72a5f35ca48347a1e0854de14b0ee39424cf9e808ef093287d75e8862a3d\" returns successfully" May 8 05:20:43.523679 containerd[1462]: time="2025-05-08T05:20:43.523524770Z" level=info msg="StartContainer for \"359355bc92d3e29e3dc17ee06d94f20606844346d178abd73da1f51307e30273\" returns successfully" May 8 05:20:43.537987 containerd[1462]: time="2025-05-08T05:20:43.537927292Z" level=info msg="StartContainer for \"da984101313778e971cab677fc836f55a562c74a31dcf185ec8270a5ec46c906\" returns successfully" May 8 05:20:44.916224 kubelet[2291]: I0508 05:20:44.916058 2291 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:45.495768 kubelet[2291]: E0508 05:20:45.495722 2291 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-e0f469a76e.novalocal\" not found" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:45.641039 kubelet[2291]: I0508 05:20:45.638774 2291 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:45.667203 kubelet[2291]: E0508 05:20:45.667105 2291 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-e0f469a76e.novalocal\" not found" May 8 05:20:45.767943 kubelet[2291]: E0508 05:20:45.767767 2291 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-e0f469a76e.novalocal\" not found" May 8 05:20:45.868717 kubelet[2291]: E0508 05:20:45.868644 2291 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-e0f469a76e.novalocal\" not found" May 8 05:20:46.763687 kubelet[2291]: I0508 05:20:46.763491 2291 apiserver.go:52] 
"Watching apiserver" May 8 05:20:46.797689 kubelet[2291]: I0508 05:20:46.797594 2291 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 05:20:47.913201 systemd[1]: Reloading requested from client PID 2566 ('systemctl') (unit session-11.scope)... May 8 05:20:47.913237 systemd[1]: Reloading... May 8 05:20:48.037044 zram_generator::config[2605]: No configuration found. May 8 05:20:48.194330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 05:20:48.273370 kubelet[2291]: W0508 05:20:48.272739 2291 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 05:20:48.297575 systemd[1]: Reloading finished in 383 ms. May 8 05:20:48.332903 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:20:48.333686 kubelet[2291]: I0508 05:20:48.333206 2291 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 05:20:48.343277 systemd[1]: kubelet.service: Deactivated successfully. May 8 05:20:48.343532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:20:48.343585 systemd[1]: kubelet.service: Consumed 1.590s CPU time, 117.4M memory peak, 0B memory swap peak. May 8 05:20:48.347324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 05:20:48.763283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 05:20:48.779485 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 05:20:48.861131 kubelet[2669]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 05:20:48.862990 kubelet[2669]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 05:20:48.862990 kubelet[2669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 05:20:48.862990 kubelet[2669]: I0508 05:20:48.861678 2669 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 05:20:48.867105 kubelet[2669]: I0508 05:20:48.867069 2669 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 05:20:48.867251 kubelet[2669]: I0508 05:20:48.867240 2669 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 05:20:48.867505 kubelet[2669]: I0508 05:20:48.867493 2669 server.go:927] "Client rotation is on, will bootstrap in background" May 8 05:20:48.869661 kubelet[2669]: I0508 05:20:48.869646 2669 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 8 05:20:48.871214 kubelet[2669]: I0508 05:20:48.871162 2669 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 05:20:48.883356 kubelet[2669]: I0508 05:20:48.883314 2669 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 05:20:48.884092 kubelet[2669]: I0508 05:20:48.884053 2669 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 05:20:48.884354 kubelet[2669]: I0508 05:20:48.884092 2669 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-e0f469a76e.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 05:20:48.884464 kubelet[2669]: I0508 05:20:48.884368 2669 topology_manager.go:138] "Creating topology manager with none policy" May 8 05:20:48.884464 kubelet[2669]: I0508 05:20:48.884383 2669 container_manager_linux.go:301] "Creating device plugin manager" May 8 05:20:48.884464 kubelet[2669]: I0508 05:20:48.884426 2669 state_mem.go:36] "Initialized new in-memory state store" May 8 05:20:48.884586 kubelet[2669]: I0508 05:20:48.884528 2669 kubelet.go:400] "Attempting to sync node with API server" May 8 05:20:48.884586 kubelet[2669]: I0508 05:20:48.884546 2669 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 05:20:48.884586 kubelet[2669]: I0508 05:20:48.884567 2669 kubelet.go:312] "Adding apiserver pod source" May 8 05:20:48.884586 kubelet[2669]: I0508 05:20:48.884582 2669 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 05:20:48.889168 kubelet[2669]: I0508 05:20:48.889124 2669 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 8 05:20:48.889339 kubelet[2669]: I0508 05:20:48.889302 2669 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 05:20:48.889765 kubelet[2669]: I0508 05:20:48.889731 2669 server.go:1264] "Started kubelet" May 8 05:20:48.898917 
kubelet[2669]: I0508 05:20:48.898880 2669 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 05:20:48.905287 kubelet[2669]: E0508 05:20:48.905256 2669 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 05:20:48.908082 kubelet[2669]: I0508 05:20:48.907962 2669 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 05:20:48.908370 kubelet[2669]: I0508 05:20:48.908343 2669 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 05:20:48.909634 kubelet[2669]: I0508 05:20:48.909435 2669 server.go:455] "Adding debug handlers to kubelet server" May 8 05:20:48.910556 kubelet[2669]: I0508 05:20:48.910462 2669 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 05:20:48.910944 kubelet[2669]: I0508 05:20:48.910907 2669 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 05:20:48.913037 kubelet[2669]: I0508 05:20:48.912127 2669 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 05:20:48.913037 kubelet[2669]: I0508 05:20:48.912352 2669 reconciler.go:26] "Reconciler: start to sync state" May 8 05:20:48.921656 kubelet[2669]: I0508 05:20:48.921576 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 05:20:48.923292 kubelet[2669]: I0508 05:20:48.923271 2669 factory.go:221] Registration of the systemd container factory successfully May 8 05:20:48.924291 kubelet[2669]: I0508 05:20:48.924256 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 05:20:48.924352 kubelet[2669]: I0508 05:20:48.924296 2669 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 05:20:48.924352 kubelet[2669]: I0508 05:20:48.924318 2669 kubelet.go:2337] "Starting kubelet main sync loop" May 8 05:20:48.924407 kubelet[2669]: E0508 05:20:48.924360 2669 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 05:20:48.924465 kubelet[2669]: I0508 05:20:48.924443 2669 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 05:20:48.930195 kubelet[2669]: I0508 05:20:48.930164 2669 factory.go:221] Registration of the containerd container factory successfully May 8 05:20:49.003566 kubelet[2669]: I0508 05:20:49.003514 2669 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 05:20:49.003566 kubelet[2669]: I0508 05:20:49.003536 2669 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 05:20:49.003566 kubelet[2669]: I0508 05:20:49.003554 2669 state_mem.go:36] "Initialized new in-memory state store" May 8 05:20:49.004666 kubelet[2669]: I0508 05:20:49.003731 2669 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 05:20:49.004666 kubelet[2669]: I0508 05:20:49.003746 2669 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 05:20:49.004666 kubelet[2669]: I0508 05:20:49.003769 2669 policy_none.go:49] "None policy: Start" May 8 05:20:49.005173 kubelet[2669]: I0508 05:20:49.005057 2669 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 05:20:49.005410 kubelet[2669]: I0508 05:20:49.005255 2669 state_mem.go:35] "Initializing new in-memory state store" May 8 
05:20:49.005572 kubelet[2669]: I0508 05:20:49.005560 2669 state_mem.go:75] "Updated machine memory state" May 8 05:20:49.011215 kubelet[2669]: I0508 05:20:49.011152 2669 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 05:20:49.011359 kubelet[2669]: I0508 05:20:49.011313 2669 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 05:20:49.013160 kubelet[2669]: I0508 05:20:49.011430 2669 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 05:20:49.014382 kubelet[2669]: I0508 05:20:49.014284 2669 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.026515 kubelet[2669]: I0508 05:20:49.024484 2669 topology_manager.go:215] "Topology Admit Handler" podUID="7e80f897a50a43de0c78a4f854795151" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.026515 kubelet[2669]: I0508 05:20:49.024586 2669 topology_manager.go:215] "Topology Admit Handler" podUID="a7f2e4f430db92be75946d9e8c611baf" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.026515 kubelet[2669]: I0508 05:20:49.024641 2669 topology_manager.go:215] "Topology Admit Handler" podUID="31dfbb04a9cf6a975fcdaf8f27e43913" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.036913 kubelet[2669]: W0508 05:20:49.034620 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 05:20:49.036913 kubelet[2669]: I0508 05:20:49.035019 2669 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.036913 kubelet[2669]: I0508 05:20:49.035090 2669 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.047136 kubelet[2669]: W0508 05:20:49.046162 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 05:20:49.047136 kubelet[2669]: W0508 05:20:49.046753 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 05:20:49.047136 kubelet[2669]: E0508 05:20:49.046809 2669 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.114049 kubelet[2669]: I0508 05:20:49.113999 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/31dfbb04a9cf6a975fcdaf8f27e43913-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"31dfbb04a9cf6a975fcdaf8f27e43913\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.114049 kubelet[2669]: I0508 05:20:49.114049 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e80f897a50a43de0c78a4f854795151-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal\" 
(UID: \"7e80f897a50a43de0c78a4f854795151\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.114049 kubelet[2669]: I0508 05:20:49.114091 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e80f897a50a43de0c78a4f854795151-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"7e80f897a50a43de0c78a4f854795151\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.114049 kubelet[2669]: I0508 05:20:49.114117 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7f2e4f430db92be75946d9e8c611baf-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"a7f2e4f430db92be75946d9e8c611baf\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.114049 kubelet[2669]: I0508 05:20:49.114138 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/31dfbb04a9cf6a975fcdaf8f27e43913-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"31dfbb04a9cf6a975fcdaf8f27e43913\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.114473 kubelet[2669]: I0508 05:20:49.114175 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e80f897a50a43de0c78a4f854795151-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"7e80f897a50a43de0c78a4f854795151\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.114473 kubelet[2669]: I0508 05:20:49.114203 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e80f897a50a43de0c78a4f854795151-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"7e80f897a50a43de0c78a4f854795151\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.114473 kubelet[2669]: I0508 05:20:49.114226 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e80f897a50a43de0c78a4f854795151-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"7e80f897a50a43de0c78a4f854795151\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.114473 kubelet[2669]: I0508 05:20:49.114248 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/31dfbb04a9cf6a975fcdaf8f27e43913-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal\" (UID: \"31dfbb04a9cf6a975fcdaf8f27e43913\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:49.887075 kubelet[2669]: I0508 05:20:49.886871 2669 apiserver.go:52] "Watching apiserver" May 8 05:20:49.915018 kubelet[2669]: I0508 05:20:49.913628 2669 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 05:20:49.982711 kubelet[2669]: I0508 
05:20:49.981611 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-e0f469a76e.novalocal" podStartSLOduration=0.981569439 podStartE2EDuration="981.569439ms" podCreationTimestamp="2025-05-08 05:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:20:49.962651625 +0000 UTC m=+1.165682343" watchObservedRunningTime="2025-05-08 05:20:49.981569439 +0000 UTC m=+1.184600106" May 8 05:20:50.009537 kubelet[2669]: I0508 05:20:50.009418 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-e0f469a76e.novalocal" podStartSLOduration=2.009378394 podStartE2EDuration="2.009378394s" podCreationTimestamp="2025-05-08 05:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:20:49.982048085 +0000 UTC m=+1.185078752" watchObservedRunningTime="2025-05-08 05:20:50.009378394 +0000 UTC m=+1.212409061" May 8 05:20:50.011078 kubelet[2669]: W0508 05:20:50.009933 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 8 05:20:50.011078 kubelet[2669]: E0508 05:20:50.010345 2669 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-3-n-e0f469a76e.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:20:50.039674 kubelet[2669]: I0508 05:20:50.038193 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-e0f469a76e.novalocal" podStartSLOduration=1.03817402 podStartE2EDuration="1.03817402s" podCreationTimestamp="2025-05-08 05:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:20:50.012766615 +0000 UTC m=+1.215797353" watchObservedRunningTime="2025-05-08 05:20:50.03817402 +0000 UTC m=+1.241204647" May 8 05:20:54.755361 sudo[1724]: pam_unix(sudo:session): session closed for user root May 8 05:20:54.929562 sshd[1721]: pam_unix(sshd:session): session closed for user core May 8 05:20:54.935320 systemd[1]: sshd@8-172.24.4.234:22-172.24.4.1:44884.service: Deactivated successfully. May 8 05:20:54.939376 systemd[1]: session-11.scope: Deactivated successfully. May 8 05:20:54.939844 systemd[1]: session-11.scope: Consumed 7.581s CPU time, 197.2M memory peak, 0B memory swap peak. May 8 05:20:54.942786 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. May 8 05:20:54.945730 systemd-logind[1443]: Removed session 11. May 8 05:21:03.277714 kubelet[2669]: I0508 05:21:03.277678 2669 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 05:21:03.278473 kubelet[2669]: I0508 05:21:03.278400 2669 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 05:21:03.278508 containerd[1462]: time="2025-05-08T05:21:03.278215245Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
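
[Editor's note] containerd reports no CNI config and waits for another component to drop one into /etc/cni/net.d/ (below, the tigera-operator/Calico install takes that role). For illustration only, a generic bridge conflist matching the PodCIDR just pushed (192.168.0.0/24); the file Calico actually writes is different:

    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "192.168.0.0/24" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
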
May 8 05:21:04.078599 kubelet[2669]: I0508 05:21:04.078468 2669 topology_manager.go:215] "Topology Admit Handler" podUID="c01f9b3a-3000-4c7d-8613-51ed7e3bff26" podNamespace="kube-system" podName="kube-proxy-xwpbp" May 8 05:21:04.096570 systemd[1]: Created slice kubepods-besteffort-podc01f9b3a_3000_4c7d_8613_51ed7e3bff26.slice - libcontainer container kubepods-besteffort-podc01f9b3a_3000_4c7d_8613_51ed7e3bff26.slice. May 8 05:21:04.213458 kubelet[2669]: I0508 05:21:04.213393 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c01f9b3a-3000-4c7d-8613-51ed7e3bff26-kube-proxy\") pod \"kube-proxy-xwpbp\" (UID: \"c01f9b3a-3000-4c7d-8613-51ed7e3bff26\") " pod="kube-system/kube-proxy-xwpbp" May 8 05:21:04.213458 kubelet[2669]: I0508 05:21:04.213442 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddcln\" (UniqueName: \"kubernetes.io/projected/c01f9b3a-3000-4c7d-8613-51ed7e3bff26-kube-api-access-ddcln\") pod \"kube-proxy-xwpbp\" (UID: \"c01f9b3a-3000-4c7d-8613-51ed7e3bff26\") " pod="kube-system/kube-proxy-xwpbp" May 8 05:21:04.213458 kubelet[2669]: I0508 05:21:04.213468 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c01f9b3a-3000-4c7d-8613-51ed7e3bff26-xtables-lock\") pod \"kube-proxy-xwpbp\" (UID: \"c01f9b3a-3000-4c7d-8613-51ed7e3bff26\") " pod="kube-system/kube-proxy-xwpbp" May 8 05:21:04.213458 kubelet[2669]: I0508 05:21:04.213486 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c01f9b3a-3000-4c7d-8613-51ed7e3bff26-lib-modules\") pod \"kube-proxy-xwpbp\" (UID: \"c01f9b3a-3000-4c7d-8613-51ed7e3bff26\") " pod="kube-system/kube-proxy-xwpbp" May 8 05:21:04.305038 kubelet[2669]: I0508 05:21:04.303509 2669 topology_manager.go:215] "Topology Admit Handler" podUID="51480ff6-5151-49fc-a1ff-f204a689ede0" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-62894" May 8 05:21:04.328084 systemd[1]: Created slice kubepods-besteffort-pod51480ff6_5151_49fc_a1ff_f204a689ede0.slice - libcontainer container kubepods-besteffort-pod51480ff6_5151_49fc_a1ff_f204a689ede0.slice. 
May 8 05:21:04.412193 containerd[1462]: time="2025-05-08T05:21:04.411813996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xwpbp,Uid:c01f9b3a-3000-4c7d-8613-51ed7e3bff26,Namespace:kube-system,Attempt:0,}" May 8 05:21:04.415471 kubelet[2669]: I0508 05:21:04.415405 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/51480ff6-5151-49fc-a1ff-f204a689ede0-var-lib-calico\") pod \"tigera-operator-797db67f8-62894\" (UID: \"51480ff6-5151-49fc-a1ff-f204a689ede0\") " pod="tigera-operator/tigera-operator-797db67f8-62894" May 8 05:21:04.415471 kubelet[2669]: I0508 05:21:04.415467 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7crn\" (UniqueName: \"kubernetes.io/projected/51480ff6-5151-49fc-a1ff-f204a689ede0-kube-api-access-x7crn\") pod \"tigera-operator-797db67f8-62894\" (UID: \"51480ff6-5151-49fc-a1ff-f204a689ede0\") " pod="tigera-operator/tigera-operator-797db67f8-62894" May 8 05:21:04.459678 containerd[1462]: time="2025-05-08T05:21:04.458863294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:21:04.459678 containerd[1462]: time="2025-05-08T05:21:04.459656381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:21:04.460286 containerd[1462]: time="2025-05-08T05:21:04.459691396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:04.460286 containerd[1462]: time="2025-05-08T05:21:04.459950072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:04.491289 systemd[1]: Started cri-containerd-415037db1c8302cc8a61111e1d63015513c003470768cb26b7c46ad8e3432446.scope - libcontainer container 415037db1c8302cc8a61111e1d63015513c003470768cb26b7c46ad8e3432446. May 8 05:21:04.532967 containerd[1462]: time="2025-05-08T05:21:04.532896881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xwpbp,Uid:c01f9b3a-3000-4c7d-8613-51ed7e3bff26,Namespace:kube-system,Attempt:0,} returns sandbox id \"415037db1c8302cc8a61111e1d63015513c003470768cb26b7c46ad8e3432446\"" May 8 05:21:04.538409 containerd[1462]: time="2025-05-08T05:21:04.538292091Z" level=info msg="CreateContainer within sandbox \"415037db1c8302cc8a61111e1d63015513c003470768cb26b7c46ad8e3432446\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 05:21:04.568318 containerd[1462]: time="2025-05-08T05:21:04.568226517Z" level=info msg="CreateContainer within sandbox \"415037db1c8302cc8a61111e1d63015513c003470768cb26b7c46ad8e3432446\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5f4d9d058685430a46d724ba4b88d1476ea941693e97d9f31024712b9cc2098b\"" May 8 05:21:04.570092 containerd[1462]: time="2025-05-08T05:21:04.568818787Z" level=info msg="StartContainer for \"5f4d9d058685430a46d724ba4b88d1476ea941693e97d9f31024712b9cc2098b\"" May 8 05:21:04.602114 systemd[1]: Started cri-containerd-5f4d9d058685430a46d724ba4b88d1476ea941693e97d9f31024712b9cc2098b.scope - libcontainer container 5f4d9d058685430a46d724ba4b88d1476ea941693e97d9f31024712b9cc2098b. 
May 8 05:21:04.630611 containerd[1462]: time="2025-05-08T05:21:04.630561540Z" level=info msg="StartContainer for \"5f4d9d058685430a46d724ba4b88d1476ea941693e97d9f31024712b9cc2098b\" returns successfully"
May 8 05:21:04.632925 containerd[1462]: time="2025-05-08T05:21:04.632895344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-62894,Uid:51480ff6-5151-49fc-a1ff-f204a689ede0,Namespace:tigera-operator,Attempt:0,}"
May 8 05:21:04.674677 containerd[1462]: time="2025-05-08T05:21:04.673577601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 05:21:04.676420 containerd[1462]: time="2025-05-08T05:21:04.676235261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 05:21:04.676420 containerd[1462]: time="2025-05-08T05:21:04.676266099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 05:21:04.676695 containerd[1462]: time="2025-05-08T05:21:04.676371556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 05:21:04.699143 systemd[1]: Started cri-containerd-81c7a7b151008380fd38b9d842e124e1a733c91f3f5fcb68cec072a4b7705089.scope - libcontainer container 81c7a7b151008380fd38b9d842e124e1a733c91f3f5fcb68cec072a4b7705089.
May 8 05:21:04.750753 containerd[1462]: time="2025-05-08T05:21:04.750697871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-62894,Uid:51480ff6-5151-49fc-a1ff-f204a689ede0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"81c7a7b151008380fd38b9d842e124e1a733c91f3f5fcb68cec072a4b7705089\""
May 8 05:21:04.753713 containerd[1462]: time="2025-05-08T05:21:04.753686140Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 8 05:21:05.047056 kubelet[2669]: I0508 05:21:05.046912 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xwpbp" podStartSLOduration=1.046894003 podStartE2EDuration="1.046894003s" podCreationTimestamp="2025-05-08 05:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:21:05.046247262 +0000 UTC m=+16.249277889" watchObservedRunningTime="2025-05-08 05:21:05.046894003 +0000 UTC m=+16.249924621"
May 8 05:21:06.565059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2760418569.mount: Deactivated successfully.
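[Editor's note] The pod_startup_latency_tracker entry above reports podStartE2EDuration as watchObservedRunningTime minus podCreationTimestamp; the pull window is the zero time here because the kube-proxy image was already present, so SLO and E2E durations coincide. A small Go check of that arithmetic, using the timestamps exactly as logged:

// Sketch: re-deriving podStartE2EDuration from the tracker entry above.
// 05:21:05.046894003 - 05:21:04.000000000 = 1.046894003s, matching the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time formatting used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-05-08 05:21:04 +0000 UTC")
	running, _ := time.Parse(layout, "2025-05-08 05:21:05.046894003 +0000 UTC")
	fmt.Println(running.Sub(created)) // prints 1.046894003s
}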
May 8 05:21:07.167349 containerd[1462]: time="2025-05-08T05:21:07.167299527Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:21:07.168863 containerd[1462]: time="2025-05-08T05:21:07.168693920Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 8 05:21:07.170486 containerd[1462]: time="2025-05-08T05:21:07.170442848Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:21:07.173265 containerd[1462]: time="2025-05-08T05:21:07.173181891Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:21:07.174181 containerd[1462]: time="2025-05-08T05:21:07.174053726Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.420329774s"
May 8 05:21:07.174181 containerd[1462]: time="2025-05-08T05:21:07.174118076Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 8 05:21:07.177874 containerd[1462]: time="2025-05-08T05:21:07.177728793Z" level=info msg="CreateContainer within sandbox \"81c7a7b151008380fd38b9d842e124e1a733c91f3f5fcb68cec072a4b7705089\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 8 05:21:07.201738 containerd[1462]: time="2025-05-08T05:21:07.201694371Z" level=info msg="CreateContainer within sandbox \"81c7a7b151008380fd38b9d842e124e1a733c91f3f5fcb68cec072a4b7705089\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"63d3f8a65e93ac13c5ef46747697a978c2d795657198f881cbf234a07dc045e3\""
May 8 05:21:07.202900 containerd[1462]: time="2025-05-08T05:21:07.202384384Z" level=info msg="StartContainer for \"63d3f8a65e93ac13c5ef46747697a978c2d795657198f881cbf234a07dc045e3\""
May 8 05:21:07.239139 systemd[1]: Started cri-containerd-63d3f8a65e93ac13c5ef46747697a978c2d795657198f881cbf234a07dc045e3.scope - libcontainer container 63d3f8a65e93ac13c5ef46747697a978c2d795657198f881cbf234a07dc045e3.
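[Editor's note] The ImageCreate/Pulled entries above are the containerd side of a single CRI ImageService PullImage call; note the distinction the log draws between the image id (the config digest, sha256:e9b19f...) and the repo digest (the registry manifest digest, sha256:a4a444...). A minimal sketch of the client side, again assuming containerd's default socket:

// Sketch only: the ImageService call behind the pull entries above. The
// response's ImageRef is the image id the log reports.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // path assumed
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	img := cri.NewImageServiceClient(conn)
	resp, err := img.PullImage(context.Background(), &cri.PullImageRequest{
		Image: &cri.ImageSpec{Image: "quay.io/tigera/operator:v1.36.7"},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.ImageRef) // e.g. sha256:e9b19fa62f47...
}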
May 8 05:21:07.282744 containerd[1462]: time="2025-05-08T05:21:07.282419728Z" level=info msg="StartContainer for \"63d3f8a65e93ac13c5ef46747697a978c2d795657198f881cbf234a07dc045e3\" returns successfully"
May 8 05:21:08.059203 kubelet[2669]: I0508 05:21:08.059040 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-62894" podStartSLOduration=1.635498283 podStartE2EDuration="4.058954106s" podCreationTimestamp="2025-05-08 05:21:04 +0000 UTC" firstStartedPulling="2025-05-08 05:21:04.752285265 +0000 UTC m=+15.955315892" lastFinishedPulling="2025-05-08 05:21:07.175741088 +0000 UTC m=+18.378771715" observedRunningTime="2025-05-08 05:21:08.057033516 +0000 UTC m=+19.260064213" watchObservedRunningTime="2025-05-08 05:21:08.058954106 +0000 UTC m=+19.261984773"
May 8 05:21:10.542671 kubelet[2669]: I0508 05:21:10.542605 2669 topology_manager.go:215] "Topology Admit Handler" podUID="45b5a457-ce43-4c5d-a44b-9263f1a2c33f" podNamespace="calico-system" podName="calico-typha-655ff9dd79-2t2xx"
May 8 05:21:10.555962 systemd[1]: Created slice kubepods-besteffort-pod45b5a457_ce43_4c5d_a44b_9263f1a2c33f.slice - libcontainer container kubepods-besteffort-pod45b5a457_ce43_4c5d_a44b_9263f1a2c33f.slice.
May 8 05:21:10.659337 kubelet[2669]: I0508 05:21:10.659186 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/45b5a457-ce43-4c5d-a44b-9263f1a2c33f-typha-certs\") pod \"calico-typha-655ff9dd79-2t2xx\" (UID: \"45b5a457-ce43-4c5d-a44b-9263f1a2c33f\") " pod="calico-system/calico-typha-655ff9dd79-2t2xx"
May 8 05:21:10.659337 kubelet[2669]: I0508 05:21:10.659240 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnpdd\" (UniqueName: \"kubernetes.io/projected/45b5a457-ce43-4c5d-a44b-9263f1a2c33f-kube-api-access-lnpdd\") pod \"calico-typha-655ff9dd79-2t2xx\" (UID: \"45b5a457-ce43-4c5d-a44b-9263f1a2c33f\") " pod="calico-system/calico-typha-655ff9dd79-2t2xx"
May 8 05:21:10.659337 kubelet[2669]: I0508 05:21:10.659266 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45b5a457-ce43-4c5d-a44b-9263f1a2c33f-tigera-ca-bundle\") pod \"calico-typha-655ff9dd79-2t2xx\" (UID: \"45b5a457-ce43-4c5d-a44b-9263f1a2c33f\") " pod="calico-system/calico-typha-655ff9dd79-2t2xx"
May 8 05:21:10.669794 kubelet[2669]: I0508 05:21:10.669739 2669 topology_manager.go:215] "Topology Admit Handler" podUID="7bfe314c-ae20-416a-8840-b3acbb76140c" podNamespace="calico-system" podName="calico-node-t42ml"
May 8 05:21:10.675704 kubelet[2669]: W0508 05:21:10.675220 2669 reflector.go:547] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4081-3-3-n-e0f469a76e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-3-n-e0f469a76e.novalocal' and this object
May 8 05:21:10.675704 kubelet[2669]: E0508 05:21:10.675256 2669 reflector.go:150] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4081-3-3-n-e0f469a76e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-3-n-e0f469a76e.novalocal' and this object
May 8 05:21:10.675704 kubelet[2669]: W0508 05:21:10.675315 2669 reflector.go:547] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4081-3-3-n-e0f469a76e.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-3-n-e0f469a76e.novalocal' and this object
May 8 05:21:10.675704 kubelet[2669]: E0508 05:21:10.675346 2669 reflector.go:150] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4081-3-3-n-e0f469a76e.novalocal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-3-n-e0f469a76e.novalocal' and this object
May 8 05:21:10.679046 systemd[1]: Created slice kubepods-besteffort-pod7bfe314c_ae20_416a_8840_b3acbb76140c.slice - libcontainer container kubepods-besteffort-pod7bfe314c_ae20_416a_8840_b3acbb76140c.slice.
May 8 05:21:10.807073 kubelet[2669]: I0508 05:21:10.806432 2669 topology_manager.go:215] "Topology Admit Handler" podUID="752f8b41-8238-4fc9-90e8-01e9c2d7826e" podNamespace="calico-system" podName="csi-node-driver-lgt7d"
May 8 05:21:10.807073 kubelet[2669]: E0508 05:21:10.806709 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgt7d" podUID="752f8b41-8238-4fc9-90e8-01e9c2d7826e"
May 8 05:21:10.860896 kubelet[2669]: I0508 05:21:10.860856 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7bfe314c-ae20-416a-8840-b3acbb76140c-flexvol-driver-host\") pod \"calico-node-t42ml\" (UID: \"7bfe314c-ae20-416a-8840-b3acbb76140c\") " pod="calico-system/calico-node-t42ml"
May 8 05:21:10.861230 kubelet[2669]: I0508 05:21:10.861187 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/752f8b41-8238-4fc9-90e8-01e9c2d7826e-registration-dir\") pod \"csi-node-driver-lgt7d\" (UID: \"752f8b41-8238-4fc9-90e8-01e9c2d7826e\") " pod="calico-system/csi-node-driver-lgt7d"
May 8 05:21:10.863004 kubelet[2669]: I0508 05:21:10.861705 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7bfe314c-ae20-416a-8840-b3acbb76140c-var-run-calico\") pod \"calico-node-t42ml\" (UID: \"7bfe314c-ae20-416a-8840-b3acbb76140c\") " pod="calico-system/calico-node-t42ml"
May 8 05:21:10.863004 kubelet[2669]: I0508 05:21:10.861733 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bfe314c-ae20-416a-8840-b3acbb76140c-lib-modules\") pod \"calico-node-t42ml\" (UID: \"7bfe314c-ae20-416a-8840-b3acbb76140c\") " pod="calico-system/calico-node-t42ml"
May 8 05:21:10.863004 kubelet[2669]: I0508 05:21:10.861753 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bfe314c-ae20-416a-8840-b3acbb76140c-xtables-lock\") pod \"calico-node-t42ml\" (UID: \"7bfe314c-ae20-416a-8840-b3acbb76140c\") " pod="calico-system/calico-node-t42ml"
May 8 05:21:10.863004 kubelet[2669]: I0508 05:21:10.861773 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/752f8b41-8238-4fc9-90e8-01e9c2d7826e-kubelet-dir\") pod \"csi-node-driver-lgt7d\" (UID: \"752f8b41-8238-4fc9-90e8-01e9c2d7826e\") " pod="calico-system/csi-node-driver-lgt7d"
May 8 05:21:10.863004 kubelet[2669]: I0508 05:21:10.861792 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/752f8b41-8238-4fc9-90e8-01e9c2d7826e-socket-dir\") pod \"csi-node-driver-lgt7d\" (UID: \"752f8b41-8238-4fc9-90e8-01e9c2d7826e\") " pod="calico-system/csi-node-driver-lgt7d"
May 8 05:21:10.863207 containerd[1462]: time="2025-05-08T05:21:10.861863521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-655ff9dd79-2t2xx,Uid:45b5a457-ce43-4c5d-a44b-9263f1a2c33f,Namespace:calico-system,Attempt:0,}"
May 8 05:21:10.863475 kubelet[2669]: I0508 05:21:10.861810 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7bfe314c-ae20-416a-8840-b3acbb76140c-policysync\") pod \"calico-node-t42ml\" (UID: \"7bfe314c-ae20-416a-8840-b3acbb76140c\") " pod="calico-system/calico-node-t42ml"
May 8 05:21:10.863475 kubelet[2669]: I0508 05:21:10.861836 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7bfe314c-ae20-416a-8840-b3acbb76140c-var-lib-calico\") pod \"calico-node-t42ml\" (UID: \"7bfe314c-ae20-416a-8840-b3acbb76140c\") " pod="calico-system/calico-node-t42ml"
May 8 05:21:10.863475 kubelet[2669]: I0508 05:21:10.861858 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7bfe314c-ae20-416a-8840-b3acbb76140c-cni-log-dir\") pod \"calico-node-t42ml\" (UID: \"7bfe314c-ae20-416a-8840-b3acbb76140c\") " pod="calico-system/calico-node-t42ml"
May 8 05:21:10.863475 kubelet[2669]: I0508 05:21:10.861880 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/752f8b41-8238-4fc9-90e8-01e9c2d7826e-varrun\") pod \"csi-node-driver-lgt7d\" (UID: \"752f8b41-8238-4fc9-90e8-01e9c2d7826e\") " pod="calico-system/csi-node-driver-lgt7d"
May 8 05:21:10.863475 kubelet[2669]: I0508 05:21:10.861898 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7bfe314c-ae20-416a-8840-b3acbb76140c-node-certs\") pod \"calico-node-t42ml\" (UID: \"7bfe314c-ae20-416a-8840-b3acbb76140c\") " pod="calico-system/calico-node-t42ml"
May 8 05:21:10.863641 kubelet[2669]: I0508 05:21:10.861916 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bfe314c-ae20-416a-8840-b3acbb76140c-tigera-ca-bundle\") pod \"calico-node-t42ml\" (UID: \"7bfe314c-ae20-416a-8840-b3acbb76140c\") " pod="calico-system/calico-node-t42ml"
May 8 05:21:10.863641 kubelet[2669]: I0508 05:21:10.861935 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7bfe314c-ae20-416a-8840-b3acbb76140c-cni-net-dir\") pod \"calico-node-t42ml\" (UID: \"7bfe314c-ae20-416a-8840-b3acbb76140c\") " pod="calico-system/calico-node-t42ml"
May 8 05:21:10.863641 kubelet[2669]: I0508 05:21:10.861953 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfxlq\" (UniqueName: \"kubernetes.io/projected/7bfe314c-ae20-416a-8840-b3acbb76140c-kube-api-access-xfxlq\") pod \"calico-node-t42ml\" (UID: \"7bfe314c-ae20-416a-8840-b3acbb76140c\") " pod="calico-system/calico-node-t42ml"
May 8 05:21:10.863641 kubelet[2669]: I0508 05:21:10.861991 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7bfe314c-ae20-416a-8840-b3acbb76140c-cni-bin-dir\") pod \"calico-node-t42ml\" (UID: \"7bfe314c-ae20-416a-8840-b3acbb76140c\") " pod="calico-system/calico-node-t42ml"
May 8 05:21:10.863641 kubelet[2669]: I0508 05:21:10.862017 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2q2m\" (UniqueName: \"kubernetes.io/projected/752f8b41-8238-4fc9-90e8-01e9c2d7826e-kube-api-access-s2q2m\") pod \"csi-node-driver-lgt7d\" (UID: \"752f8b41-8238-4fc9-90e8-01e9c2d7826e\") " pod="calico-system/csi-node-driver-lgt7d"
May 8 05:21:10.909878 containerd[1462]: time="2025-05-08T05:21:10.909618135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 05:21:10.909878 containerd[1462]: time="2025-05-08T05:21:10.909674871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 05:21:10.909878 containerd[1462]: time="2025-05-08T05:21:10.909688517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 05:21:10.909878 containerd[1462]: time="2025-05-08T05:21:10.909816837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 05:21:10.946514 systemd[1]: Started cri-containerd-c1fd0368b22ce56d27b15cbd95c11ed5a57fa10c7794a1b236176cff0fe77e50.scope - libcontainer container c1fd0368b22ce56d27b15cbd95c11ed5a57fa10c7794a1b236176cff0fe77e50.
May 8 05:21:10.964557 kubelet[2669]: E0508 05:21:10.964430 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 05:21:10.964557 kubelet[2669]: W0508 05:21:10.964452 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 05:21:10.964557 kubelet[2669]: E0508 05:21:10.964475 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 05:21:11.044415 containerd[1462]: time="2025-05-08T05:21:11.044329367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-655ff9dd79-2t2xx,Uid:45b5a457-ce43-4c5d-a44b-9263f1a2c33f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1fd0368b22ce56d27b15cbd95c11ed5a57fa10c7794a1b236176cff0fe77e50\""
May 8 05:21:11.048746 containerd[1462]: time="2025-05-08T05:21:11.048673250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 8 05:21:11.064230 kubelet[2669]: E0508 05:21:11.063957 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 05:21:11.064230 kubelet[2669]: W0508 05:21:11.064102 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 05:21:11.064230 kubelet[2669]: E0508 05:21:11.064125 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 8 05:21:11.775089 systemd[1]: run-containerd-runc-k8s.io-c1fd0368b22ce56d27b15cbd95c11ed5a57fa10c7794a1b236176cff0fe77e50-runc.VcJd0E.mount: Deactivated successfully.
May 8 05:21:11.884048 containerd[1462]: time="2025-05-08T05:21:11.883964118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t42ml,Uid:7bfe314c-ae20-416a-8840-b3acbb76140c,Namespace:calico-system,Attempt:0,}"
May 8 05:21:11.929355 containerd[1462]: time="2025-05-08T05:21:11.928415513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 05:21:11.929355 containerd[1462]: time="2025-05-08T05:21:11.928550095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 05:21:11.929355 containerd[1462]: time="2025-05-08T05:21:11.928598266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 05:21:11.929355 containerd[1462]: time="2025-05-08T05:21:11.928755450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 05:21:11.964629 systemd[1]: Started cri-containerd-748e390f22c6020284f59208e7b88919c70a70bbf45b125035371701bbce37b8.scope - libcontainer container 748e390f22c6020284f59208e7b88919c70a70bbf45b125035371701bbce37b8.
May 8 05:21:11.991892 containerd[1462]: time="2025-05-08T05:21:11.991726581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t42ml,Uid:7bfe314c-ae20-416a-8840-b3acbb76140c,Namespace:calico-system,Attempt:0,} returns sandbox id \"748e390f22c6020284f59208e7b88919c70a70bbf45b125035371701bbce37b8\""
May 8 05:21:12.926238 kubelet[2669]: E0508 05:21:12.925216 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgt7d" podUID="752f8b41-8238-4fc9-90e8-01e9c2d7826e"
May 8 05:21:14.513943 containerd[1462]: time="2025-05-08T05:21:14.513751030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:21:14.516875 containerd[1462]: time="2025-05-08T05:21:14.516390298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870"
May 8 05:21:14.518069 containerd[1462]: time="2025-05-08T05:21:14.518031405Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:21:14.521605 containerd[1462]: time="2025-05-08T05:21:14.521554810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 05:21:14.523069 containerd[1462]: time="2025-05-08T05:21:14.523026329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.474314266s"
May 8 05:21:14.523069 containerd[1462]: time="2025-05-08T05:21:14.523060673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\""
May 8 05:21:14.526456 containerd[1462]: time="2025-05-08T05:21:14.526196382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
May 8 05:21:14.544871 containerd[1462]: time="2025-05-08T05:21:14.544820426Z" level=info msg="CreateContainer within sandbox \"c1fd0368b22ce56d27b15cbd95c11ed5a57fa10c7794a1b236176cff0fe77e50\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 8 05:21:14.567743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3440548633.mount: Deactivated successfully.
May 8 05:21:14.575891 containerd[1462]: time="2025-05-08T05:21:14.575827505Z" level=info msg="CreateContainer within sandbox \"c1fd0368b22ce56d27b15cbd95c11ed5a57fa10c7794a1b236176cff0fe77e50\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"31d780b17bc4db207e66f7e01f7c64cbf73fcd8f2fb7684d840f01e60d811546\""
May 8 05:21:14.576592 containerd[1462]: time="2025-05-08T05:21:14.576543276Z" level=info msg="StartContainer for \"31d780b17bc4db207e66f7e01f7c64cbf73fcd8f2fb7684d840f01e60d811546\""
May 8 05:21:14.614243 systemd[1]: Started cri-containerd-31d780b17bc4db207e66f7e01f7c64cbf73fcd8f2fb7684d840f01e60d811546.scope - libcontainer container 31d780b17bc4db207e66f7e01f7c64cbf73fcd8f2fb7684d840f01e60d811546.
May 8 05:21:14.673606 containerd[1462]: time="2025-05-08T05:21:14.673555845Z" level=info msg="StartContainer for \"31d780b17bc4db207e66f7e01f7c64cbf73fcd8f2fb7684d840f01e60d811546\" returns successfully"
May 8 05:21:14.927519 kubelet[2669]: E0508 05:21:14.925375 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgt7d" podUID="752f8b41-8238-4fc9-90e8-01e9c2d7826e"
May 8 05:21:15.083863 kubelet[2669]: I0508 05:21:15.083617 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-655ff9dd79-2t2xx" podStartSLOduration=1.6064335220000001 podStartE2EDuration="5.083476745s" podCreationTimestamp="2025-05-08 05:21:10 +0000 UTC" firstStartedPulling="2025-05-08 05:21:11.047550345 +0000 UTC m=+22.250580972" lastFinishedPulling="2025-05-08 05:21:14.524593577 +0000 UTC m=+25.727624195" observedRunningTime="2025-05-08 05:21:15.081348565 +0000 UTC m=+26.284379242" watchObservedRunningTime="2025-05-08 05:21:15.083476745 +0000 UTC m=+26.286507362"
May 8 05:21:15.094439 kubelet[2669]: E0508 05:21:15.094377 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 8 05:21:15.094684 kubelet[2669]: W0508 05:21:15.094402 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 8 05:21:15.094684 kubelet[2669]: E0508 05:21:15.094583 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
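[Editor's note] The calico-typha tracker entry above ties the earlier numbers together: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (15.083476745 - 10 = 5.083476745s), and podStartSLOduration is E2E minus the image-pull window (lastFinishedPulling - firstStartedPulling = 3.477043232s). The same formula reproduces the tigera-operator entry's 1.635498283 exactly; for typha the logged 1.6064335220000001 differs from the derivation below only in the last digits, consistent with the tracker's float64 rounding. A small Go check, using the logged timestamps:

// Sketch: re-deriving the calico-typha startup numbers from the entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time { t, _ := time.Parse(layout, s); return t }
	created := parse("2025-05-08 05:21:10 +0000 UTC")
	running := parse("2025-05-08 05:21:15.083476745 +0000 UTC")
	pullStart := parse("2025-05-08 05:21:11.047550345 +0000 UTC")
	pullEnd := parse("2025-05-08 05:21:14.524593577 +0000 UTC")

	e2e := running.Sub(created)         // 5.083476745s
	slo := e2e - pullEnd.Sub(pullStart) // ~1.606433513s (log: 1.6064335220000001)
	fmt.Println(e2e, slo)
}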
Error: unexpected end of JSON input" May 8 05:21:15.116969 kubelet[2669]: E0508 05:21:15.116924 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:15.116969 kubelet[2669]: W0508 05:21:15.116941 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:15.116969 kubelet[2669]: E0508 05:21:15.116958 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:15.117431 kubelet[2669]: E0508 05:21:15.117371 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:15.117431 kubelet[2669]: W0508 05:21:15.117381 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:15.117431 kubelet[2669]: E0508 05:21:15.117393 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:15.117654 kubelet[2669]: E0508 05:21:15.117606 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:15.117654 kubelet[2669]: W0508 05:21:15.117643 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:15.117783 kubelet[2669]: E0508 05:21:15.117687 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:15.117959 kubelet[2669]: E0508 05:21:15.117910 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:15.117959 kubelet[2669]: W0508 05:21:15.117927 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:15.118180 kubelet[2669]: E0508 05:21:15.118009 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:15.118180 kubelet[2669]: E0508 05:21:15.118171 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:15.118180 kubelet[2669]: W0508 05:21:15.118180 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:15.118358 kubelet[2669]: E0508 05:21:15.118194 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 05:21:15.118453 kubelet[2669]: E0508 05:21:15.118418 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:15.118453 kubelet[2669]: W0508 05:21:15.118433 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:15.118453 kubelet[2669]: E0508 05:21:15.118443 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:15.118763 kubelet[2669]: E0508 05:21:15.118736 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:15.118763 kubelet[2669]: W0508 05:21:15.118753 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:15.118763 kubelet[2669]: E0508 05:21:15.118763 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.067597 kubelet[2669]: I0508 05:21:16.066156 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 05:21:16.112833 kubelet[2669]: E0508 05:21:16.112766 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.113266 kubelet[2669]: W0508 05:21:16.113232 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.113518 kubelet[2669]: E0508 05:21:16.113484 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.114101 kubelet[2669]: E0508 05:21:16.114073 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.114247 kubelet[2669]: W0508 05:21:16.114222 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.114590 kubelet[2669]: E0508 05:21:16.114374 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.114940 kubelet[2669]: E0508 05:21:16.114911 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.115340 kubelet[2669]: W0508 05:21:16.115134 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.115340 kubelet[2669]: E0508 05:21:16.115172 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 05:21:16.116147 kubelet[2669]: E0508 05:21:16.115834 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.116147 kubelet[2669]: W0508 05:21:16.115904 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.116147 kubelet[2669]: E0508 05:21:16.115933 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.116876 kubelet[2669]: E0508 05:21:16.116662 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.116876 kubelet[2669]: W0508 05:21:16.116690 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.116876 kubelet[2669]: E0508 05:21:16.116712 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.117753 kubelet[2669]: E0508 05:21:16.117535 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.117753 kubelet[2669]: W0508 05:21:16.117559 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.117753 kubelet[2669]: E0508 05:21:16.117582 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.118209 kubelet[2669]: E0508 05:21:16.118182 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.118438 kubelet[2669]: W0508 05:21:16.118321 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.118438 kubelet[2669]: E0508 05:21:16.118354 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.119375 kubelet[2669]: E0508 05:21:16.119263 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.119375 kubelet[2669]: W0508 05:21:16.119291 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.120101 kubelet[2669]: E0508 05:21:16.119544 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 05:21:16.120853 kubelet[2669]: E0508 05:21:16.120608 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.120853 kubelet[2669]: W0508 05:21:16.120635 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.120853 kubelet[2669]: E0508 05:21:16.120658 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.121315 kubelet[2669]: E0508 05:21:16.121287 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.121630 kubelet[2669]: W0508 05:21:16.121446 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.121630 kubelet[2669]: E0508 05:21:16.121481 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.122515 kubelet[2669]: E0508 05:21:16.122290 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.122515 kubelet[2669]: W0508 05:21:16.122318 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.122515 kubelet[2669]: E0508 05:21:16.122341 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.122883 kubelet[2669]: E0508 05:21:16.122856 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.123353 kubelet[2669]: W0508 05:21:16.123110 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.123353 kubelet[2669]: E0508 05:21:16.123143 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.123773 kubelet[2669]: E0508 05:21:16.123745 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.124189 kubelet[2669]: W0508 05:21:16.123896 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.124189 kubelet[2669]: E0508 05:21:16.123930 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 05:21:16.124806 kubelet[2669]: E0508 05:21:16.124777 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.125195 kubelet[2669]: W0508 05:21:16.124922 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.125195 kubelet[2669]: E0508 05:21:16.124955 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.125685 kubelet[2669]: E0508 05:21:16.125656 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.126244 kubelet[2669]: W0508 05:21:16.125817 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.126244 kubelet[2669]: E0508 05:21:16.125854 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.126631 kubelet[2669]: E0508 05:21:16.126603 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.126778 kubelet[2669]: W0508 05:21:16.126753 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.126933 kubelet[2669]: E0508 05:21:16.126906 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.127959 kubelet[2669]: E0508 05:21:16.127885 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.127959 kubelet[2669]: W0508 05:21:16.127949 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.128539 kubelet[2669]: E0508 05:21:16.128120 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.128686 kubelet[2669]: E0508 05:21:16.128644 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.128686 kubelet[2669]: W0508 05:21:16.128668 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.128839 kubelet[2669]: E0508 05:21:16.128709 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 05:21:16.129160 kubelet[2669]: E0508 05:21:16.129127 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.129160 kubelet[2669]: W0508 05:21:16.129157 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.129327 kubelet[2669]: E0508 05:21:16.129212 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.129638 kubelet[2669]: E0508 05:21:16.129604 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.129638 kubelet[2669]: W0508 05:21:16.129634 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.129817 kubelet[2669]: E0508 05:21:16.129670 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.130185 kubelet[2669]: E0508 05:21:16.130148 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.130185 kubelet[2669]: W0508 05:21:16.130180 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.131173 kubelet[2669]: E0508 05:21:16.130247 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.131173 kubelet[2669]: E0508 05:21:16.130509 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.131173 kubelet[2669]: W0508 05:21:16.130534 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.131173 kubelet[2669]: E0508 05:21:16.130768 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.131173 kubelet[2669]: E0508 05:21:16.130839 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.131173 kubelet[2669]: W0508 05:21:16.130859 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.131173 kubelet[2669]: E0508 05:21:16.130918 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 05:21:16.131615 kubelet[2669]: E0508 05:21:16.131304 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.131615 kubelet[2669]: W0508 05:21:16.131325 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.131615 kubelet[2669]: E0508 05:21:16.131378 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.131800 kubelet[2669]: E0508 05:21:16.131748 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.131800 kubelet[2669]: W0508 05:21:16.131769 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.131917 kubelet[2669]: E0508 05:21:16.131797 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.132290 kubelet[2669]: E0508 05:21:16.132258 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.132290 kubelet[2669]: W0508 05:21:16.132287 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.132452 kubelet[2669]: E0508 05:21:16.132420 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.133237 kubelet[2669]: E0508 05:21:16.133203 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.133237 kubelet[2669]: W0508 05:21:16.133232 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.133679 kubelet[2669]: E0508 05:21:16.133487 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.133679 kubelet[2669]: E0508 05:21:16.133566 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.133679 kubelet[2669]: W0508 05:21:16.133587 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.133679 kubelet[2669]: E0508 05:21:16.133618 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 8 05:21:16.134301 kubelet[2669]: E0508 05:21:16.133900 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.134301 kubelet[2669]: W0508 05:21:16.133921 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.134301 kubelet[2669]: E0508 05:21:16.133955 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.134524 kubelet[2669]: E0508 05:21:16.134412 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.134524 kubelet[2669]: W0508 05:21:16.134434 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.134524 kubelet[2669]: E0508 05:21:16.134458 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.134811 kubelet[2669]: E0508 05:21:16.134767 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.134811 kubelet[2669]: W0508 05:21:16.134795 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.134963 kubelet[2669]: E0508 05:21:16.134816 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.135265 kubelet[2669]: E0508 05:21:16.135232 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.135265 kubelet[2669]: W0508 05:21:16.135262 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.135487 kubelet[2669]: E0508 05:21:16.135283 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 8 05:21:16.136188 kubelet[2669]: E0508 05:21:16.136152 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 8 05:21:16.136188 kubelet[2669]: W0508 05:21:16.136183 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 8 05:21:16.136355 kubelet[2669]: E0508 05:21:16.136207 2669 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
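The repeated kubelet errors above come from FlexVolume plugin probing: the kubelet scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the directory nodeagent~uds, fails to execute the uds binary it expects there, and so cannot unmarshal the (empty) stdout of the init call. Below is a minimal sketch of the call convention the kubelet is waiting on, assuming the standard FlexVolume JSON envelope; the driver path and the init argument are taken from the log, everything else is illustrative:

```go
// Hypothetical FlexVolume driver sketch (would be installed as
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds).
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON envelope the kubelet tries to unmarshal
// after every driver call (driver-call.go in the log above).
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// An empty stdout here is exactly what yields
		// "unexpected end of JSON input" in the kubelet.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		reply(driverStatus{Status: "Not supported"})
	}
}
```

With such a binary in place, the init probe would succeed and this error loop would stop; until the flexvol-driver container (started below) populates the directory, the probe keeps failing on every plugin rescan.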
May 8 05:21:16.925615 kubelet[2669]: E0508 05:21:16.925270 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgt7d" podUID="752f8b41-8238-4fc9-90e8-01e9c2d7826e" May 8 05:21:16.932339 containerd[1462]: time="2025-05-08T05:21:16.932302970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:16.933623 containerd[1462]: time="2025-05-08T05:21:16.933470248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 8 05:21:16.934831 containerd[1462]: time="2025-05-08T05:21:16.934787758Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:16.938013 containerd[1462]: time="2025-05-08T05:21:16.937478053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:16.938444 containerd[1462]: time="2025-05-08T05:21:16.938286188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.412045703s" May 8 05:21:16.938444 containerd[1462]: time="2025-05-08T05:21:16.938323568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 8 05:21:16.944547 containerd[1462]: time="2025-05-08T05:21:16.941779306Z" level=info msg="CreateContainer within sandbox \"748e390f22c6020284f59208e7b88919c70a70bbf45b125035371701bbce37b8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 8 05:21:16.964220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount181654572.mount: Deactivated successfully. May 8 05:21:16.972130 containerd[1462]: time="2025-05-08T05:21:16.972092988Z" level=info msg="CreateContainer within sandbox \"748e390f22c6020284f59208e7b88919c70a70bbf45b125035371701bbce37b8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ffd93d5fc86f784d6cb95a480520e542c2141c87333d008193dc7265f8a9fb3b\"" May 8 05:21:16.972984 containerd[1462]: time="2025-05-08T05:21:16.972879643Z" level=info msg="StartContainer for \"ffd93d5fc86f784d6cb95a480520e542c2141c87333d008193dc7265f8a9fb3b\"" May 8 05:21:17.011143 systemd[1]: Started cri-containerd-ffd93d5fc86f784d6cb95a480520e542c2141c87333d008193dc7265f8a9fb3b.scope - libcontainer container ffd93d5fc86f784d6cb95a480520e542c2141c87333d008193dc7265f8a9fb3b.
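At this point containerd has finished pulling ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3 (about 2.41 s for roughly 5.4 MB read) and has created and started the flexvol-driver container inside the existing sandbox. A sketch of the same pull performed directly against containerd's Go client, assuming the default socket path and the k8s.io namespace that CRI-managed images live in:

```go
// Sketch only: pulls the image logged above via the containerd Go client.
// Socket path and namespace are typical defaults, not read from this log.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images are kept in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx,
		"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```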
May 8 05:21:17.046529 containerd[1462]: time="2025-05-08T05:21:17.046378791Z" level=info msg="StartContainer for \"ffd93d5fc86f784d6cb95a480520e542c2141c87333d008193dc7265f8a9fb3b\" returns successfully" May 8 05:21:17.051324 systemd[1]: cri-containerd-ffd93d5fc86f784d6cb95a480520e542c2141c87333d008193dc7265f8a9fb3b.scope: Deactivated successfully. May 8 05:21:17.085752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffd93d5fc86f784d6cb95a480520e542c2141c87333d008193dc7265f8a9fb3b-rootfs.mount: Deactivated successfully. May 8 05:21:17.702694 containerd[1462]: time="2025-05-08T05:21:17.702513053Z" level=info msg="shim disconnected" id=ffd93d5fc86f784d6cb95a480520e542c2141c87333d008193dc7265f8a9fb3b namespace=k8s.io May 8 05:21:17.703423 containerd[1462]: time="2025-05-08T05:21:17.703115792Z" level=warning msg="cleaning up after shim disconnected" id=ffd93d5fc86f784d6cb95a480520e542c2141c87333d008193dc7265f8a9fb3b namespace=k8s.io May 8 05:21:17.703423 containerd[1462]: time="2025-05-08T05:21:17.703157982Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:21:18.084088 containerd[1462]: time="2025-05-08T05:21:18.083745544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 8 05:21:18.928600 kubelet[2669]: E0508 05:21:18.927895 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgt7d" podUID="752f8b41-8238-4fc9-90e8-01e9c2d7826e" May 8 05:21:20.927803 kubelet[2669]: E0508 05:21:20.926600 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgt7d" podUID="752f8b41-8238-4fc9-90e8-01e9c2d7826e" May 8 05:21:22.926851 kubelet[2669]: E0508 05:21:22.925342 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgt7d" podUID="752f8b41-8238-4fc9-90e8-01e9c2d7826e" May 8 05:21:24.418629 containerd[1462]: time="2025-05-08T05:21:24.418577645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:24.420911 containerd[1462]: time="2025-05-08T05:21:24.420860636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 8 05:21:24.422179 containerd[1462]: time="2025-05-08T05:21:24.422134855Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:24.428572 containerd[1462]: time="2025-05-08T05:21:24.428499799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:24.430737 containerd[1462]: time="2025-05-08T05:21:24.430392539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", 
repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.34657517s" May 8 05:21:24.430737 containerd[1462]: time="2025-05-08T05:21:24.430426112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 8 05:21:24.437774 containerd[1462]: time="2025-05-08T05:21:24.437741489Z" level=info msg="CreateContainer within sandbox \"748e390f22c6020284f59208e7b88919c70a70bbf45b125035371701bbce37b8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 8 05:21:24.469035 containerd[1462]: time="2025-05-08T05:21:24.468999870Z" level=info msg="CreateContainer within sandbox \"748e390f22c6020284f59208e7b88919c70a70bbf45b125035371701bbce37b8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"def4423288a0939f6f51996b87030a37cc67fb6390deb01b0ff5303c1ecfd7ae\"" May 8 05:21:24.472303 containerd[1462]: time="2025-05-08T05:21:24.470932814Z" level=info msg="StartContainer for \"def4423288a0939f6f51996b87030a37cc67fb6390deb01b0ff5303c1ecfd7ae\"" May 8 05:21:24.518248 systemd[1]: Started cri-containerd-def4423288a0939f6f51996b87030a37cc67fb6390deb01b0ff5303c1ecfd7ae.scope - libcontainer container def4423288a0939f6f51996b87030a37cc67fb6390deb01b0ff5303c1ecfd7ae. May 8 05:21:24.555725 containerd[1462]: time="2025-05-08T05:21:24.555677066Z" level=info msg="StartContainer for \"def4423288a0939f6f51996b87030a37cc67fb6390deb01b0ff5303c1ecfd7ae\" returns successfully" May 8 05:21:24.927026 kubelet[2669]: E0508 05:21:24.925340 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgt7d" podUID="752f8b41-8238-4fc9-90e8-01e9c2d7826e" May 8 05:21:25.702652 containerd[1462]: time="2025-05-08T05:21:25.702568995Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 05:21:25.705560 systemd[1]: cri-containerd-def4423288a0939f6f51996b87030a37cc67fb6390deb01b0ff5303c1ecfd7ae.scope: Deactivated successfully. May 8 05:21:25.748907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-def4423288a0939f6f51996b87030a37cc67fb6390deb01b0ff5303c1ecfd7ae-rootfs.mount: Deactivated successfully. May 8 05:21:25.775395 kubelet[2669]: I0508 05:21:25.775312 2669 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 05:21:26.126144 kubelet[2669]: I0508 05:21:26.125440 2669 topology_manager.go:215] "Topology Admit Handler" podUID="822354fe-2362-41e4-8ca0-056e0fecd083" podNamespace="calico-system" podName="calico-kube-controllers-59b8776755-76bww" May 8 05:21:26.145301 systemd[1]: Created slice kubepods-besteffort-pod822354fe_2362_41e4_8ca0_056e0fecd083.slice - libcontainer container kubepods-besteffort-pod822354fe_2362_41e4_8ca0_056e0fecd083.slice. 
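The install-cni container started above has written /etc/cni/net.d/calico-kubeconfig, which is what triggers the reload attempt that fails with "no network config found in /etc/cni/net.d"; the kubelet keeps reporting NetworkReady=false until a network config lands in that directory. A sketch of the end state install-cni has to produce, assuming a typical Calico conflist; the file name 10-calico.conflist and the plugin fields below are conventional, only the calico-kubeconfig path appears in this log:

```go
// Sketch: writes a minimal, hypothetical Calico conflist of the kind the
// kubelet is waiting for. The real file is templated by install-cni.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "ipam": { "type": "calico-ipam" },
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Once a valid conflist exists here, the fs-watch reload succeeds
	// and NetworkReady flips to true.
	if err := os.WriteFile("/etc/cni/net.d/10-calico.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```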
May 8 05:21:26.281048 kubelet[2669]: I0508 05:21:26.279396 2669 topology_manager.go:215] "Topology Admit Handler" podUID="92de57fd-57ee-4cd6-9f52-21d9887e2fee" podNamespace="kube-system" podName="coredns-7db6d8ff4d-k8p4k" May 8 05:21:26.287565 kubelet[2669]: I0508 05:21:26.286885 2669 topology_manager.go:215] "Topology Admit Handler" podUID="4957b03e-c51d-4c66-82a2-0815615ec4ed" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5mj5k" May 8 05:21:26.291368 kubelet[2669]: I0508 05:21:26.290686 2669 topology_manager.go:215] "Topology Admit Handler" podUID="1e517a02-62e9-4463-be28-71db2b7410c2" podNamespace="calico-apiserver" podName="calico-apiserver-564bcb876c-d22nw" May 8 05:21:26.294523 kubelet[2669]: I0508 05:21:26.294211 2669 topology_manager.go:215] "Topology Admit Handler" podUID="ada9b516-fc2d-475c-96a6-11055bc82985" podNamespace="calico-apiserver" podName="calico-apiserver-564bcb876c-l5rx9" May 8 05:21:26.314608 systemd[1]: Created slice kubepods-burstable-pod92de57fd_57ee_4cd6_9f52_21d9887e2fee.slice - libcontainer container kubepods-burstable-pod92de57fd_57ee_4cd6_9f52_21d9887e2fee.slice. May 8 05:21:26.318085 kubelet[2669]: I0508 05:21:26.317814 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/822354fe-2362-41e4-8ca0-056e0fecd083-tigera-ca-bundle\") pod \"calico-kube-controllers-59b8776755-76bww\" (UID: \"822354fe-2362-41e4-8ca0-056e0fecd083\") " pod="calico-system/calico-kube-controllers-59b8776755-76bww" May 8 05:21:26.318085 kubelet[2669]: I0508 05:21:26.317918 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2q48\" (UniqueName: \"kubernetes.io/projected/822354fe-2362-41e4-8ca0-056e0fecd083-kube-api-access-q2q48\") pod \"calico-kube-controllers-59b8776755-76bww\" (UID: \"822354fe-2362-41e4-8ca0-056e0fecd083\") " pod="calico-system/calico-kube-controllers-59b8776755-76bww" May 8 05:21:26.339421 systemd[1]: Created slice kubepods-burstable-pod4957b03e_c51d_4c66_82a2_0815615ec4ed.slice - libcontainer container kubepods-burstable-pod4957b03e_c51d_4c66_82a2_0815615ec4ed.slice. May 8 05:21:26.348326 systemd[1]: Created slice kubepods-besteffort-pod1e517a02_62e9_4463_be28_71db2b7410c2.slice - libcontainer container kubepods-besteffort-pod1e517a02_62e9_4463_be28_71db2b7410c2.slice. May 8 05:21:26.354239 systemd[1]: Created slice kubepods-besteffort-podada9b516_fc2d_475c_96a6_11055bc82985.slice - libcontainer container kubepods-besteffort-podada9b516_fc2d_475c_96a6_11055bc82985.slice. 
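The reconciler_common lines show the kubelet verifying each declared volume (ConfigMaps, Secrets, projected service-account tokens) as attached before it will mount them into the newly admitted pods. A sketch of the pod-spec shape behind one of these entries, using the k8s.io/api/core/v1 types; the volume name comes from the log, while the backing ConfigMap name is an assumption:

```go
// Sketch: the volume declaration that produces a
// VerifyControllerAttachedVolume line like the ones above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "tigera-ca-bundle", // volume name as logged
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				// ConfigMap name assumed; the log only shows the
				// UID-qualified volume path.
				LocalObjectReference: corev1.LocalObjectReference{Name: "tigera-ca-bundle"},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```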
May 8 05:21:26.419019 kubelet[2669]: I0508 05:21:26.418859 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ada9b516-fc2d-475c-96a6-11055bc82985-calico-apiserver-certs\") pod \"calico-apiserver-564bcb876c-l5rx9\" (UID: \"ada9b516-fc2d-475c-96a6-11055bc82985\") " pod="calico-apiserver/calico-apiserver-564bcb876c-l5rx9" May 8 05:21:26.419019 kubelet[2669]: I0508 05:21:26.418963 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4957b03e-c51d-4c66-82a2-0815615ec4ed-config-volume\") pod \"coredns-7db6d8ff4d-5mj5k\" (UID: \"4957b03e-c51d-4c66-82a2-0815615ec4ed\") " pod="kube-system/coredns-7db6d8ff4d-5mj5k" May 8 05:21:26.419180 kubelet[2669]: I0508 05:21:26.419067 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xsjl\" (UniqueName: \"kubernetes.io/projected/1e517a02-62e9-4463-be28-71db2b7410c2-kube-api-access-2xsjl\") pod \"calico-apiserver-564bcb876c-d22nw\" (UID: \"1e517a02-62e9-4463-be28-71db2b7410c2\") " pod="calico-apiserver/calico-apiserver-564bcb876c-d22nw" May 8 05:21:26.419180 kubelet[2669]: I0508 05:21:26.419119 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bt5h\" (UniqueName: \"kubernetes.io/projected/ada9b516-fc2d-475c-96a6-11055bc82985-kube-api-access-4bt5h\") pod \"calico-apiserver-564bcb876c-l5rx9\" (UID: \"ada9b516-fc2d-475c-96a6-11055bc82985\") " pod="calico-apiserver/calico-apiserver-564bcb876c-l5rx9" May 8 05:21:26.419261 kubelet[2669]: I0508 05:21:26.419198 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92de57fd-57ee-4cd6-9f52-21d9887e2fee-config-volume\") pod \"coredns-7db6d8ff4d-k8p4k\" (UID: \"92de57fd-57ee-4cd6-9f52-21d9887e2fee\") " pod="kube-system/coredns-7db6d8ff4d-k8p4k" May 8 05:21:26.419307 kubelet[2669]: I0508 05:21:26.419276 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g746m\" (UniqueName: \"kubernetes.io/projected/4957b03e-c51d-4c66-82a2-0815615ec4ed-kube-api-access-g746m\") pod \"coredns-7db6d8ff4d-5mj5k\" (UID: \"4957b03e-c51d-4c66-82a2-0815615ec4ed\") " pod="kube-system/coredns-7db6d8ff4d-5mj5k" May 8 05:21:26.419350 kubelet[2669]: I0508 05:21:26.419321 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkl4m\" (UniqueName: \"kubernetes.io/projected/92de57fd-57ee-4cd6-9f52-21d9887e2fee-kube-api-access-gkl4m\") pod \"coredns-7db6d8ff4d-k8p4k\" (UID: \"92de57fd-57ee-4cd6-9f52-21d9887e2fee\") " pod="kube-system/coredns-7db6d8ff4d-k8p4k" May 8 05:21:26.419398 kubelet[2669]: I0508 05:21:26.419369 2669 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e517a02-62e9-4463-be28-71db2b7410c2-calico-apiserver-certs\") pod \"calico-apiserver-564bcb876c-d22nw\" (UID: \"1e517a02-62e9-4463-be28-71db2b7410c2\") " pod="calico-apiserver/calico-apiserver-564bcb876c-d22nw" May 8 05:21:26.656148 containerd[1462]: time="2025-05-08T05:21:26.653632431Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-564bcb876c-d22nw,Uid:1e517a02-62e9-4463-be28-71db2b7410c2,Namespace:calico-apiserver,Attempt:0,}" May 8 05:21:26.682604 containerd[1462]: time="2025-05-08T05:21:26.682321236Z" level=info msg="shim disconnected" id=def4423288a0939f6f51996b87030a37cc67fb6390deb01b0ff5303c1ecfd7ae namespace=k8s.io May 8 05:21:26.682922 containerd[1462]: time="2025-05-08T05:21:26.682902476Z" level=warning msg="cleaning up after shim disconnected" id=def4423288a0939f6f51996b87030a37cc67fb6390deb01b0ff5303c1ecfd7ae namespace=k8s.io May 8 05:21:26.683185 containerd[1462]: time="2025-05-08T05:21:26.683084067Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 05:21:26.752322 containerd[1462]: time="2025-05-08T05:21:26.752283895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b8776755-76bww,Uid:822354fe-2362-41e4-8ca0-056e0fecd083,Namespace:calico-system,Attempt:0,}" May 8 05:21:26.806030 containerd[1462]: time="2025-05-08T05:21:26.805958292Z" level=error msg="Failed to destroy network for sandbox \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:26.806454 containerd[1462]: time="2025-05-08T05:21:26.806392135Z" level=error msg="encountered an error cleaning up failed sandbox \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:26.806500 containerd[1462]: time="2025-05-08T05:21:26.806461746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564bcb876c-d22nw,Uid:1e517a02-62e9-4463-be28-71db2b7410c2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:26.809698 kubelet[2669]: E0508 05:21:26.806754 2669 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:26.809698 kubelet[2669]: E0508 05:21:26.806821 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564bcb876c-d22nw" May 8 05:21:26.809698 kubelet[2669]: E0508 05:21:26.806844 2669 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564bcb876c-d22nw" May 8 05:21:26.809822 kubelet[2669]: E0508 05:21:26.806887 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564bcb876c-d22nw_calico-apiserver(1e517a02-62e9-4463-be28-71db2b7410c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564bcb876c-d22nw_calico-apiserver(1e517a02-62e9-4463-be28-71db2b7410c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564bcb876c-d22nw" podUID="1e517a02-62e9-4463-be28-71db2b7410c2" May 8 05:21:26.810961 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280-shm.mount: Deactivated successfully. May 8 05:21:26.849490 containerd[1462]: time="2025-05-08T05:21:26.849377098Z" level=error msg="Failed to destroy network for sandbox \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:26.851641 containerd[1462]: time="2025-05-08T05:21:26.849793579Z" level=error msg="encountered an error cleaning up failed sandbox \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:26.851641 containerd[1462]: time="2025-05-08T05:21:26.849850496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b8776755-76bww,Uid:822354fe-2362-41e4-8ca0-056e0fecd083,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:26.851763 kubelet[2669]: E0508 05:21:26.850050 2669 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:26.851763 kubelet[2669]: E0508 05:21:26.850103 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59b8776755-76bww" May 8 05:21:26.851763 kubelet[2669]: E0508 05:21:26.850130 2669 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59b8776755-76bww" May 8 05:21:26.851865 kubelet[2669]: E0508 05:21:26.850173 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59b8776755-76bww_calico-system(822354fe-2362-41e4-8ca0-056e0fecd083)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59b8776755-76bww_calico-system(822354fe-2362-41e4-8ca0-056e0fecd083)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59b8776755-76bww" podUID="822354fe-2362-41e4-8ca0-056e0fecd083" May 8 05:21:26.852663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606-shm.mount: Deactivated successfully. May 8 05:21:26.932708 containerd[1462]: time="2025-05-08T05:21:26.932578842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k8p4k,Uid:92de57fd-57ee-4cd6-9f52-21d9887e2fee,Namespace:kube-system,Attempt:0,}" May 8 05:21:26.937237 systemd[1]: Created slice kubepods-besteffort-pod752f8b41_8238_4fc9_90e8_01e9c2d7826e.slice - libcontainer container kubepods-besteffort-pod752f8b41_8238_4fc9_90e8_01e9c2d7826e.slice. 
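
Every failure in the burst above has the same root cause: the Calico CNI plugin cannot learn which node it is running on, because /var/lib/calico/nodename does not exist yet. That file is written by the calico/node container once it starts (hence the hint in the error text), and until then every CNI ADD for a new pod sandbox fails with the same stat error. A minimal Go sketch of that gate, not the actual Calico source:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // Path from the errors above: calico/node writes this file once it is up,
    // and the CNI binary reads it on every ADD/DEL to learn its node name.
    const nodenameFile = "/var/lib/calico/nodename"

    // nodename reproduces the failing lookup. A missing file yields the exact
    // hint seen in the log: "stat /var/lib/calico/nodename: no such file or
    // directory: check that the calico/node container is running and has
    // mounted /var/lib/calico/".
    func nodename() (string, error) {
    	if _, err := os.Stat(nodenameFile); err != nil {
    		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
    	}
    	data, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(data)), nil
    }

    func main() {
    	name, err := nodename()
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "plugin type=%q failed (add): %v\n", "calico", err)
    		os.Exit(1)
    	}
    	fmt.Println("node:", name)
    }
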
May 8 05:21:26.940856 containerd[1462]: time="2025-05-08T05:21:26.940805027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgt7d,Uid:752f8b41-8238-4fc9-90e8-01e9c2d7826e,Namespace:calico-system,Attempt:0,}" May 8 05:21:26.949108 containerd[1462]: time="2025-05-08T05:21:26.949059776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5mj5k,Uid:4957b03e-c51d-4c66-82a2-0815615ec4ed,Namespace:kube-system,Attempt:0,}" May 8 05:21:26.957623 containerd[1462]: time="2025-05-08T05:21:26.957571405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564bcb876c-l5rx9,Uid:ada9b516-fc2d-475c-96a6-11055bc82985,Namespace:calico-apiserver,Attempt:0,}" May 8 05:21:27.105046 containerd[1462]: time="2025-05-08T05:21:27.104393458Z" level=error msg="Failed to destroy network for sandbox \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.105862 containerd[1462]: time="2025-05-08T05:21:27.105799385Z" level=error msg="encountered an error cleaning up failed sandbox \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.105914 containerd[1462]: time="2025-05-08T05:21:27.105866481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k8p4k,Uid:92de57fd-57ee-4cd6-9f52-21d9887e2fee,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.106204 kubelet[2669]: E0508 05:21:27.106121 2669 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.106359 kubelet[2669]: E0508 05:21:27.106339 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-k8p4k" May 8 05:21:27.106436 kubelet[2669]: E0508 05:21:27.106420 2669 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-k8p4k" May 8 
05:21:27.106539 kubelet[2669]: E0508 05:21:27.106514 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-k8p4k_kube-system(92de57fd-57ee-4cd6-9f52-21d9887e2fee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-k8p4k_kube-system(92de57fd-57ee-4cd6-9f52-21d9887e2fee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-k8p4k" podUID="92de57fd-57ee-4cd6-9f52-21d9887e2fee" May 8 05:21:27.115311 kubelet[2669]: I0508 05:21:27.115277 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:27.116913 containerd[1462]: time="2025-05-08T05:21:27.116690748Z" level=info msg="StopPodSandbox for \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\"" May 8 05:21:27.116913 containerd[1462]: time="2025-05-08T05:21:27.116860917Z" level=info msg="Ensure that sandbox d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606 in task-service has been cleanup successfully" May 8 05:21:27.117228 kubelet[2669]: I0508 05:21:27.117208 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:27.118742 containerd[1462]: time="2025-05-08T05:21:27.118605428Z" level=info msg="StopPodSandbox for \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\"" May 8 05:21:27.121857 containerd[1462]: time="2025-05-08T05:21:27.121359862Z" level=info msg="Ensure that sandbox 6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280 in task-service has been cleanup successfully" May 8 05:21:27.133156 kubelet[2669]: I0508 05:21:27.133131 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:27.134159 containerd[1462]: time="2025-05-08T05:21:27.133793878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 8 05:21:27.135826 containerd[1462]: time="2025-05-08T05:21:27.135521227Z" level=info msg="StopPodSandbox for \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\"" May 8 05:21:27.137702 containerd[1462]: time="2025-05-08T05:21:27.137163547Z" level=info msg="Ensure that sandbox 8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272 in task-service has been cleanup successfully" May 8 05:21:27.151158 containerd[1462]: time="2025-05-08T05:21:27.151115379Z" level=error msg="Failed to destroy network for sandbox \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.151900 containerd[1462]: time="2025-05-08T05:21:27.151745721Z" level=error msg="encountered an error cleaning up failed sandbox \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.151900 containerd[1462]: time="2025-05-08T05:21:27.151796096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgt7d,Uid:752f8b41-8238-4fc9-90e8-01e9c2d7826e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.152910 kubelet[2669]: E0508 05:21:27.152112 2669 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.152910 kubelet[2669]: E0508 05:21:27.152169 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgt7d" May 8 05:21:27.152910 kubelet[2669]: E0508 05:21:27.152192 2669 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgt7d" May 8 05:21:27.153169 kubelet[2669]: E0508 05:21:27.153099 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lgt7d_calico-system(752f8b41-8238-4fc9-90e8-01e9c2d7826e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lgt7d_calico-system(752f8b41-8238-4fc9-90e8-01e9c2d7826e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lgt7d" podUID="752f8b41-8238-4fc9-90e8-01e9c2d7826e" May 8 05:21:27.192157 containerd[1462]: time="2025-05-08T05:21:27.192098358Z" level=error msg="Failed to destroy network for sandbox \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.192671 containerd[1462]: time="2025-05-08T05:21:27.192646486Z" level=error msg="encountered an error cleaning up failed sandbox \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.192876 containerd[1462]: time="2025-05-08T05:21:27.192844006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5mj5k,Uid:4957b03e-c51d-4c66-82a2-0815615ec4ed,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.193527 kubelet[2669]: E0508 05:21:27.193183 2669 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.193527 kubelet[2669]: E0508 05:21:27.193235 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5mj5k" May 8 05:21:27.193527 kubelet[2669]: E0508 05:21:27.193258 2669 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5mj5k" May 8 05:21:27.193642 kubelet[2669]: E0508 05:21:27.193296 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5mj5k_kube-system(4957b03e-c51d-4c66-82a2-0815615ec4ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5mj5k_kube-system(4957b03e-c51d-4c66-82a2-0815615ec4ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5mj5k" podUID="4957b03e-c51d-4c66-82a2-0815615ec4ed" May 8 05:21:27.208261 containerd[1462]: time="2025-05-08T05:21:27.208216382Z" level=error msg="Failed to destroy network for sandbox \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.208668 containerd[1462]: time="2025-05-08T05:21:27.208489063Z" level=error msg="StopPodSandbox for \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\" failed" error="failed to destroy network for sandbox 
\"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.208873 kubelet[2669]: E0508 05:21:27.208835 2669 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:27.209201 kubelet[2669]: E0508 05:21:27.209070 2669 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606"} May 8 05:21:27.209201 kubelet[2669]: E0508 05:21:27.209140 2669 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"822354fe-2362-41e4-8ca0-056e0fecd083\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 05:21:27.209201 kubelet[2669]: E0508 05:21:27.209169 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"822354fe-2362-41e4-8ca0-056e0fecd083\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59b8776755-76bww" podUID="822354fe-2362-41e4-8ca0-056e0fecd083" May 8 05:21:27.210129 containerd[1462]: time="2025-05-08T05:21:27.210021817Z" level=error msg="encountered an error cleaning up failed sandbox \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.210129 containerd[1462]: time="2025-05-08T05:21:27.210083763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564bcb876c-l5rx9,Uid:ada9b516-fc2d-475c-96a6-11055bc82985,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.211036 kubelet[2669]: E0508 05:21:27.210490 2669 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.211036 kubelet[2669]: E0508 05:21:27.210565 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564bcb876c-l5rx9" May 8 05:21:27.211036 kubelet[2669]: E0508 05:21:27.210782 2669 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-564bcb876c-l5rx9" May 8 05:21:27.211141 kubelet[2669]: E0508 05:21:27.210832 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-564bcb876c-l5rx9_calico-apiserver(ada9b516-fc2d-475c-96a6-11055bc82985)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-564bcb876c-l5rx9_calico-apiserver(ada9b516-fc2d-475c-96a6-11055bc82985)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564bcb876c-l5rx9" podUID="ada9b516-fc2d-475c-96a6-11055bc82985" May 8 05:21:27.223784 containerd[1462]: time="2025-05-08T05:21:27.223740982Z" level=error msg="StopPodSandbox for \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\" failed" error="failed to destroy network for sandbox \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.224272 kubelet[2669]: E0508 05:21:27.224222 2669 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:27.224338 kubelet[2669]: E0508 05:21:27.224276 2669 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280"} May 8 05:21:27.224338 kubelet[2669]: E0508 05:21:27.224323 2669 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e517a02-62e9-4463-be28-71db2b7410c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 05:21:27.224429 kubelet[2669]: E0508 05:21:27.224384 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e517a02-62e9-4463-be28-71db2b7410c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564bcb876c-d22nw" podUID="1e517a02-62e9-4463-be28-71db2b7410c2" May 8 05:21:27.229677 containerd[1462]: time="2025-05-08T05:21:27.229580452Z" level=error msg="StopPodSandbox for \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\" failed" error="failed to destroy network for sandbox \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:27.229840 kubelet[2669]: E0508 05:21:27.229779 2669 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:27.229888 kubelet[2669]: E0508 05:21:27.229839 2669 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272"} May 8 05:21:27.229888 kubelet[2669]: E0508 05:21:27.229871 2669 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92de57fd-57ee-4cd6-9f52-21d9887e2fee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 05:21:27.230009 kubelet[2669]: E0508 05:21:27.229894 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92de57fd-57ee-4cd6-9f52-21d9887e2fee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-k8p4k" podUID="92de57fd-57ee-4cd6-9f52-21d9887e2fee" May 8 05:21:28.138998 kubelet[2669]: I0508 05:21:28.138864 2669 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:28.142044 containerd[1462]: time="2025-05-08T05:21:28.141615108Z" level=info msg="StopPodSandbox for \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\"" May 8 05:21:28.144249 kubelet[2669]: I0508 05:21:28.144019 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:28.144335 containerd[1462]: time="2025-05-08T05:21:28.143921032Z" level=info msg="Ensure that sandbox fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113 in task-service has been cleanup successfully" May 8 05:21:28.146240 containerd[1462]: time="2025-05-08T05:21:28.146117371Z" level=info msg="StopPodSandbox for \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\"" May 8 05:21:28.148068 containerd[1462]: time="2025-05-08T05:21:28.147757436Z" level=info msg="Ensure that sandbox 7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb in task-service has been cleanup successfully" May 8 05:21:28.153595 kubelet[2669]: I0508 05:21:28.153202 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:28.158116 containerd[1462]: time="2025-05-08T05:21:28.157114091Z" level=info msg="StopPodSandbox for \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\"" May 8 05:21:28.158116 containerd[1462]: time="2025-05-08T05:21:28.157459218Z" level=info msg="Ensure that sandbox 62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e in task-service has been cleanup successfully" May 8 05:21:28.231550 containerd[1462]: time="2025-05-08T05:21:28.231231103Z" level=error msg="StopPodSandbox for \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\" failed" error="failed to destroy network for sandbox \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:28.231965 kubelet[2669]: E0508 05:21:28.231918 2669 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:28.232048 kubelet[2669]: E0508 05:21:28.231992 2669 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113"} May 8 05:21:28.232048 kubelet[2669]: E0508 05:21:28.232031 2669 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4957b03e-c51d-4c66-82a2-0815615ec4ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 
05:21:28.232143 kubelet[2669]: E0508 05:21:28.232057 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4957b03e-c51d-4c66-82a2-0815615ec4ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5mj5k" podUID="4957b03e-c51d-4c66-82a2-0815615ec4ed" May 8 05:21:28.240499 containerd[1462]: time="2025-05-08T05:21:28.240136312Z" level=error msg="StopPodSandbox for \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\" failed" error="failed to destroy network for sandbox \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:28.241346 containerd[1462]: time="2025-05-08T05:21:28.241206980Z" level=error msg="StopPodSandbox for \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\" failed" error="failed to destroy network for sandbox \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 8 05:21:28.243902 kubelet[2669]: E0508 05:21:28.243856 2669 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:28.243964 kubelet[2669]: E0508 05:21:28.243917 2669 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e"} May 8 05:21:28.243964 kubelet[2669]: E0508 05:21:28.243955 2669 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ada9b516-fc2d-475c-96a6-11055bc82985\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 05:21:28.244084 kubelet[2669]: E0508 05:21:28.244053 2669 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:28.244084 kubelet[2669]: E0508 05:21:28.244078 2669 
kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb"} May 8 05:21:28.244146 kubelet[2669]: E0508 05:21:28.244109 2669 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"752f8b41-8238-4fc9-90e8-01e9c2d7826e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 8 05:21:28.244195 kubelet[2669]: E0508 05:21:28.244136 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"752f8b41-8238-4fc9-90e8-01e9c2d7826e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lgt7d" podUID="752f8b41-8238-4fc9-90e8-01e9c2d7826e" May 8 05:21:28.246725 kubelet[2669]: E0508 05:21:28.246033 2669 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ada9b516-fc2d-475c-96a6-11055bc82985\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-564bcb876c-l5rx9" podUID="ada9b516-fc2d-475c-96a6-11055bc82985" May 8 05:21:36.019629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3840630327.mount: Deactivated successfully. 
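
The DEL path is gated on the same file, which is why the StopPodSandbox attempts above fail too: the sandboxes can be neither created nor torn down, containerd marks them SANDBOX_UNKNOWN, and kubelet keeps logging "Error syncing pod, skipping" while retrying. An illustrative sketch of that retry pattern, assuming a simple capped exponential backoff rather than kubelet's actual pod-worker logic:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // syncPod stands in for a single kubelet sync attempt: while the nodename
    // file is absent, sandbox setup cannot get past the Calico plugin.
    func syncPod() error {
    	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
    		return fmt.Errorf("failed to \"CreatePodSandbox\": %w", err)
    	}
    	return nil
    }

    func main() {
    	backoff := 10 * time.Second
    	const maxBackoff = 5 * time.Minute
    	for {
    		if err := syncPod(); err != nil {
    			fmt.Println("Error syncing pod, skipping:", err)
    			time.Sleep(backoff)
    			backoff *= 2 // retry with capped exponential backoff
    			if backoff > maxBackoff {
    				backoff = maxBackoff
    			}
    			continue
    		}
    		fmt.Println("pod synced")
    		return
    	}
    }

In the log, the retries stop paying off only at 05:21:36, once the calico/node image below finishes pulling and the container can start.
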
May 8 05:21:36.070650 containerd[1462]: time="2025-05-08T05:21:36.070548999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:36.072820 containerd[1462]: time="2025-05-08T05:21:36.072771386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 8 05:21:36.074394 containerd[1462]: time="2025-05-08T05:21:36.074340930Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:36.078169 containerd[1462]: time="2025-05-08T05:21:36.077835843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:36.078729 containerd[1462]: time="2025-05-08T05:21:36.078428104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.944561439s" May 8 05:21:36.078729 containerd[1462]: time="2025-05-08T05:21:36.078523283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 8 05:21:36.100757 containerd[1462]: time="2025-05-08T05:21:36.099959001Z" level=info msg="CreateContainer within sandbox \"748e390f22c6020284f59208e7b88919c70a70bbf45b125035371701bbce37b8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 8 05:21:36.127618 containerd[1462]: time="2025-05-08T05:21:36.127507293Z" level=info msg="CreateContainer within sandbox \"748e390f22c6020284f59208e7b88919c70a70bbf45b125035371701bbce37b8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2100e3a44064aaafc9a23628bd748e75d5f6d54fcfd7e5c2fe4971b7d727854f\"" May 8 05:21:36.130264 containerd[1462]: time="2025-05-08T05:21:36.128505354Z" level=info msg="StartContainer for \"2100e3a44064aaafc9a23628bd748e75d5f6d54fcfd7e5c2fe4971b7d727854f\"" May 8 05:21:36.167128 systemd[1]: Started cri-containerd-2100e3a44064aaafc9a23628bd748e75d5f6d54fcfd7e5c2fe4971b7d727854f.scope - libcontainer container 2100e3a44064aaafc9a23628bd748e75d5f6d54fcfd7e5c2fe4971b7d727854f. May 8 05:21:36.203195 containerd[1462]: time="2025-05-08T05:21:36.203154224Z" level=info msg="StartContainer for \"2100e3a44064aaafc9a23628bd748e75d5f6d54fcfd7e5c2fe4971b7d727854f\" returns successfully" May 8 05:21:36.276423 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 8 05:21:36.276547 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
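
With calico-node (container 2100e3a4...) finally running, the missing precondition can be satisfied: on startup it records the node's name at /var/lib/calico/nodename on the host mount. A stand-in sketch for that step, assuming the NODENAME environment variable convention used by the calico-node DaemonSet (real calico-node performs more elaborate node-name detection):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	// Assumption for this sketch: prefer the NODENAME env var, else fall
    	// back to the lower-cased hostname.
    	name := os.Getenv("NODENAME")
    	if name == "" {
    		host, err := os.Hostname()
    		if err != nil {
    			panic(err)
    		}
    		name = strings.ToLower(host)
    	}
    	if err := os.MkdirAll("/var/lib/calico", 0o755); err != nil {
    		panic(err)
    	}
    	// As soon as this file exists, the CNI ADD/DEL failures above stop.
    	if err := os.WriteFile("/var/lib/calico/nodename", []byte(name), 0o644); err != nil {
    		panic(err)
    	}
    }
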
May 8 05:21:36.600606 kubelet[2669]: I0508 05:21:36.599422 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 05:21:37.208531 kubelet[2669]: I0508 05:21:37.208255 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t42ml" podStartSLOduration=3.12255018 podStartE2EDuration="27.208237513s" podCreationTimestamp="2025-05-08 05:21:10 +0000 UTC" firstStartedPulling="2025-05-08 05:21:11.993907129 +0000 UTC m=+23.196937746" lastFinishedPulling="2025-05-08 05:21:36.079594412 +0000 UTC m=+47.282625079" observedRunningTime="2025-05-08 05:21:37.205754777 +0000 UTC m=+48.408785464" watchObservedRunningTime="2025-05-08 05:21:37.208237513 +0000 UTC m=+48.411268130" May 8 05:21:37.995003 kernel: bpftool[3926]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 8 05:21:38.255964 systemd-networkd[1382]: vxlan.calico: Link UP May 8 05:21:38.256120 systemd-networkd[1382]: vxlan.calico: Gained carrier May 8 05:21:38.927184 containerd[1462]: time="2025-05-08T05:21:38.926916641Z" level=info msg="StopPodSandbox for \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\"" May 8 05:21:39.182372 containerd[1462]: 2025-05-08 05:21:39.086 [INFO][4032] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:39.182372 containerd[1462]: 2025-05-08 05:21:39.086 [INFO][4032] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" iface="eth0" netns="/var/run/netns/cni-6bd77eb3-5378-3a08-01c6-2b07dfc89ce2" May 8 05:21:39.182372 containerd[1462]: 2025-05-08 05:21:39.090 [INFO][4032] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" iface="eth0" netns="/var/run/netns/cni-6bd77eb3-5378-3a08-01c6-2b07dfc89ce2" May 8 05:21:39.182372 containerd[1462]: 2025-05-08 05:21:39.095 [INFO][4032] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" iface="eth0" netns="/var/run/netns/cni-6bd77eb3-5378-3a08-01c6-2b07dfc89ce2" May 8 05:21:39.182372 containerd[1462]: 2025-05-08 05:21:39.095 [INFO][4032] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:39.182372 containerd[1462]: 2025-05-08 05:21:39.095 [INFO][4032] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:39.182372 containerd[1462]: 2025-05-08 05:21:39.153 [INFO][4039] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" HandleID="k8s-pod-network.62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:39.182372 containerd[1462]: 2025-05-08 05:21:39.153 [INFO][4039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:39.182372 containerd[1462]: 2025-05-08 05:21:39.153 [INFO][4039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:39.182372 containerd[1462]: 2025-05-08 05:21:39.168 [WARNING][4039] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" HandleID="k8s-pod-network.62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:39.182372 containerd[1462]: 2025-05-08 05:21:39.168 [INFO][4039] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" HandleID="k8s-pod-network.62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:39.182372 containerd[1462]: 2025-05-08 05:21:39.170 [INFO][4039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:39.182372 containerd[1462]: 2025-05-08 05:21:39.178 [INFO][4032] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:39.187048 containerd[1462]: time="2025-05-08T05:21:39.184689722Z" level=info msg="TearDown network for sandbox \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\" successfully" May 8 05:21:39.187048 containerd[1462]: time="2025-05-08T05:21:39.184712565Z" level=info msg="StopPodSandbox for \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\" returns successfully" May 8 05:21:39.186945 systemd[1]: run-netns-cni\x2d6bd77eb3\x2d5378\x2d3a08\x2d01c6\x2d2b07dfc89ce2.mount: Deactivated successfully. May 8 05:21:39.193588 containerd[1462]: time="2025-05-08T05:21:39.193391681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564bcb876c-l5rx9,Uid:ada9b516-fc2d-475c-96a6-11055bc82985,Namespace:calico-apiserver,Attempt:1,}" May 8 05:21:39.905256 systemd-networkd[1382]: cali8c9dbacfc1f: Link UP May 8 05:21:39.906043 systemd-networkd[1382]: cali8c9dbacfc1f: Gained carrier May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.416 [INFO][4050] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0 calico-apiserver-564bcb876c- calico-apiserver ada9b516-fc2d-475c-96a6-11055bc82985 755 0 2025-05-08 05:21:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:564bcb876c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-e0f469a76e.novalocal calico-apiserver-564bcb876c-l5rx9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8c9dbacfc1f [] []}} ContainerID="a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-l5rx9" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-" May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.434 [INFO][4050] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-l5rx9" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.737 [INFO][4059] ipam/ipam_plugin.go 225: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" HandleID="k8s-pod-network.a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.762 [INFO][4059] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" HandleID="k8s-pod-network.a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-e0f469a76e.novalocal", "pod":"calico-apiserver-564bcb876c-l5rx9", "timestamp":"2025-05-08 05:21:39.737257234 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e0f469a76e.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.762 [INFO][4059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.763 [INFO][4059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.763 [INFO][4059] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e0f469a76e.novalocal' May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.767 [INFO][4059] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.778 [INFO][4059] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.788 [INFO][4059] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.792 [INFO][4059] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.797 [INFO][4059] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.797 [INFO][4059] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.800 [INFO][4059] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5 May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.846 [INFO][4059] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.880 [INFO][4059] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.65.193/26] block=192.168.65.192/26 handle="k8s-pod-network.a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.880 [INFO][4059] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.193/26] handle="k8s-pod-network.a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.880 [INFO][4059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:40.061960 containerd[1462]: 2025-05-08 05:21:39.880 [INFO][4059] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.193/26] IPv6=[] ContainerID="a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" HandleID="k8s-pod-network.a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:40.067479 containerd[1462]: 2025-05-08 05:21:39.883 [INFO][4050] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-l5rx9" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0", GenerateName:"calico-apiserver-564bcb876c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ada9b516-fc2d-475c-96a6-11055bc82985", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564bcb876c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"", Pod:"calico-apiserver-564bcb876c-l5rx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c9dbacfc1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:40.067479 containerd[1462]: 2025-05-08 05:21:39.883 [INFO][4050] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.193/32] ContainerID="a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-l5rx9" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:40.067479 containerd[1462]: 2025-05-08 05:21:39.883 [INFO][4050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c9dbacfc1f 
ContainerID="a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-l5rx9" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:40.067479 containerd[1462]: 2025-05-08 05:21:39.928 [INFO][4050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-l5rx9" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:40.067479 containerd[1462]: 2025-05-08 05:21:39.928 [INFO][4050] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-l5rx9" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0", GenerateName:"calico-apiserver-564bcb876c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ada9b516-fc2d-475c-96a6-11055bc82985", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564bcb876c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5", Pod:"calico-apiserver-564bcb876c-l5rx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c9dbacfc1f", MAC:"5a:44:c8:0a:57:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:40.067479 containerd[1462]: 2025-05-08 05:21:40.052 [INFO][4050] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-l5rx9" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:40.122030 containerd[1462]: time="2025-05-08T05:21:40.121793019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:21:40.122030 containerd[1462]: time="2025-05-08T05:21:40.121892857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:21:40.123319 containerd[1462]: time="2025-05-08T05:21:40.121927031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:40.123451 containerd[1462]: time="2025-05-08T05:21:40.123250723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:40.157135 systemd[1]: Started cri-containerd-a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5.scope - libcontainer container a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5. May 8 05:21:40.199952 containerd[1462]: time="2025-05-08T05:21:40.199914825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564bcb876c-l5rx9,Uid:ada9b516-fc2d-475c-96a6-11055bc82985,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5\"" May 8 05:21:40.201870 containerd[1462]: time="2025-05-08T05:21:40.201678593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 05:21:40.203112 systemd-networkd[1382]: vxlan.calico: Gained IPv6LL May 8 05:21:40.929448 containerd[1462]: time="2025-05-08T05:21:40.928800668Z" level=info msg="StopPodSandbox for \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\"" May 8 05:21:41.070677 containerd[1462]: 2025-05-08 05:21:41.023 [INFO][4131] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:41.070677 containerd[1462]: 2025-05-08 05:21:41.024 [INFO][4131] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" iface="eth0" netns="/var/run/netns/cni-8a7a1650-7971-9d9a-9188-2c9118700197" May 8 05:21:41.070677 containerd[1462]: 2025-05-08 05:21:41.024 [INFO][4131] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" iface="eth0" netns="/var/run/netns/cni-8a7a1650-7971-9d9a-9188-2c9118700197" May 8 05:21:41.070677 containerd[1462]: 2025-05-08 05:21:41.025 [INFO][4131] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" iface="eth0" netns="/var/run/netns/cni-8a7a1650-7971-9d9a-9188-2c9118700197" May 8 05:21:41.070677 containerd[1462]: 2025-05-08 05:21:41.025 [INFO][4131] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:41.070677 containerd[1462]: 2025-05-08 05:21:41.026 [INFO][4131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:41.070677 containerd[1462]: 2025-05-08 05:21:41.057 [INFO][4139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" HandleID="k8s-pod-network.fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:41.070677 containerd[1462]: 2025-05-08 05:21:41.057 [INFO][4139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 05:21:41.070677 containerd[1462]: 2025-05-08 05:21:41.057 [INFO][4139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:41.070677 containerd[1462]: 2025-05-08 05:21:41.064 [WARNING][4139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" HandleID="k8s-pod-network.fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:41.070677 containerd[1462]: 2025-05-08 05:21:41.065 [INFO][4139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" HandleID="k8s-pod-network.fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:41.070677 containerd[1462]: 2025-05-08 05:21:41.066 [INFO][4139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:41.070677 containerd[1462]: 2025-05-08 05:21:41.068 [INFO][4131] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:41.073392 containerd[1462]: time="2025-05-08T05:21:41.070764581Z" level=info msg="TearDown network for sandbox \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\" successfully" May 8 05:21:41.073392 containerd[1462]: time="2025-05-08T05:21:41.071073089Z" level=info msg="StopPodSandbox for \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\" returns successfully" May 8 05:21:41.074737 containerd[1462]: time="2025-05-08T05:21:41.073775758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5mj5k,Uid:4957b03e-c51d-4c66-82a2-0815615ec4ed,Namespace:kube-system,Attempt:1,}" May 8 05:21:41.076333 systemd[1]: run-netns-cni\x2d8a7a1650\x2d7971\x2d9d9a\x2d9188\x2d2c9118700197.mount: Deactivated successfully. 
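The release path above is deliberately idempotent: the plugin first releases by handleID, treats "Asked to release address but it doesn't exist" as a warning rather than an error, then falls back to releasing by workloadID so allocations keyed the older way still get cleaned up. A minimal sketch of that two-step release, using a toy in-memory store rather than Calico's real datastore client (all types here are invented for illustration):

package main

import (
    "errors"
    "fmt"
)

// errNotFound stands in for the datastore's "no allocation under this key" result.
var errNotFound = errors.New("allocation not found")

// ipamStore is a stand-in for the IPAM backend; keys are handle IDs or workload IDs.
type ipamStore map[string][]string

func (s ipamStore) releaseByKey(key string) error {
    if _, ok := s[key]; !ok {
        return errNotFound
    }
    delete(s, key)
    return nil
}

// releaseForEndpoint mirrors the log's order of operations: release by handleID
// first; a missing handle is only a warning, and the workload ID is tried next
// so legacy-keyed allocations are cleaned up too.
func releaseForEndpoint(store ipamStore, handleID, workloadID string) {
    if err := store.releaseByKey(handleID); errors.Is(err, errNotFound) {
        fmt.Printf("WARNING: asked to release address but it doesn't exist. Ignoring handle=%s\n", handleID)
    }
    if err := store.releaseByKey(workloadID); err == nil {
        fmt.Printf("released addresses under workload ID %s\n", workloadID)
    }
}

func main() {
    store := ipamStore{"coredns--5mj5k-eth0": {"192.168.65.194"}}
    releaseForEndpoint(store, "k8s-pod-network.fe16484754c7...", "coredns--5mj5k-eth0")
}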
May 8 05:21:41.280455 systemd-networkd[1382]: caliab5c6df7d0c: Link UP May 8 05:21:41.284056 systemd-networkd[1382]: caliab5c6df7d0c: Gained carrier May 8 05:21:41.291098 systemd-networkd[1382]: cali8c9dbacfc1f: Gained IPv6LL May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.170 [INFO][4151] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0 coredns-7db6d8ff4d- kube-system 4957b03e-c51d-4c66-82a2-0815615ec4ed 765 0 2025-05-08 05:21:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-e0f469a76e.novalocal coredns-7db6d8ff4d-5mj5k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliab5c6df7d0c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5mj5k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-" May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.171 [INFO][4151] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5mj5k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.214 [INFO][4159] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" HandleID="k8s-pod-network.35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.228 [INFO][4159] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" HandleID="k8s-pod-network.35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031d710), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-e0f469a76e.novalocal", "pod":"coredns-7db6d8ff4d-5mj5k", "timestamp":"2025-05-08 05:21:41.214809176 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e0f469a76e.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.229 [INFO][4159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.229 [INFO][4159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.229 [INFO][4159] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e0f469a76e.novalocal' May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.231 [INFO][4159] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.237 [INFO][4159] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.243 [INFO][4159] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.246 [INFO][4159] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.249 [INFO][4159] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.249 [INFO][4159] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.252 [INFO][4159] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184 May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.258 [INFO][4159] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.271 [INFO][4159] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.194/26] block=192.168.65.192/26 handle="k8s-pod-network.35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.271 [INFO][4159] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.194/26] handle="k8s-pod-network.35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.271 [INFO][4159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
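The 05:21:41.229–41.271 run above is Calico's block-affinity allocation in miniature: confirm this node's affinity to 192.168.65.192/26, load the block, pick the next free ordinal, create a handle, and write the block back ("Writing block in order to claim IPs" — the write is what makes the claim stick; the real writer does a compare-and-swap against the datastore and retries on conflict). A stripped-down sketch of the ordinal walk, with an in-memory block instead of the datastore:

package main

import (
    "fmt"
    "net"
)

// block models an affine allocation block such as 192.168.65.192/26.
type block struct {
    cidr        net.IPNet
    allocations map[int]string // ordinal -> handle ID
}

// nextOrdinal returns the first free ordinal; ordinal 0 is the block's base
// address (.192), ordinal 1 is .193, and so on.
func (b *block) nextOrdinal() (int, bool) {
    ones, bits := b.cidr.Mask.Size()
    size := 1 << (bits - ones) // 64 addresses for a /26
    for ord := 0; ord < size; ord++ {
        if _, taken := b.allocations[ord]; !taken {
            return ord, true
        }
    }
    return 0, false
}

// assign claims the next free address for handleID. The last-octet arithmetic
// only works because a /26 fits inside one octet, which is all we need here.
func (b *block) assign(handleID string) (net.IP, bool) {
    ord, ok := b.nextOrdinal()
    if !ok {
        return nil, false
    }
    b.allocations[ord] = handleID
    ip := make(net.IP, 4)
    copy(ip, b.cidr.IP.To4())
    ip[3] += byte(ord)
    return ip, true
}

func main() {
    _, cidr, _ := net.ParseCIDR("192.168.65.192/26")
    // .192 assumed already taken (likely the node's tunnel address); .193 was
    // claimed above by calico-apiserver-564bcb876c-l5rx9.
    b := &block{cidr: *cidr, allocations: map[int]string{0: "tunnel (assumed)", 1: "apiserver-l5rx9"}}
    ip, _ := b.assign("k8s-pod-network.35f816d8...")
    fmt.Println("claimed", ip) // 192.168.65.194, matching the log
}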
May 8 05:21:41.308882 containerd[1462]: 2025-05-08 05:21:41.271 [INFO][4159] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.194/26] IPv6=[] ContainerID="35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" HandleID="k8s-pod-network.35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:41.309788 containerd[1462]: 2025-05-08 05:21:41.274 [INFO][4151] cni-plugin/k8s.go 386: Populated endpoint ContainerID="35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5mj5k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4957b03e-c51d-4c66-82a2-0815615ec4ed", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-5mj5k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab5c6df7d0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:41.309788 containerd[1462]: 2025-05-08 05:21:41.274 [INFO][4151] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.194/32] ContainerID="35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5mj5k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:41.309788 containerd[1462]: 2025-05-08 05:21:41.274 [INFO][4151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab5c6df7d0c ContainerID="35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5mj5k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:41.309788 containerd[1462]: 2025-05-08 05:21:41.284 [INFO][4151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-5mj5k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:41.309788 containerd[1462]: 2025-05-08 05:21:41.288 [INFO][4151] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5mj5k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4957b03e-c51d-4c66-82a2-0815615ec4ed", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184", Pod:"coredns-7db6d8ff4d-5mj5k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab5c6df7d0c", MAC:"ea:37:ad:d9:3c:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:41.309788 containerd[1462]: 2025-05-08 05:21:41.306 [INFO][4151] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5mj5k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:41.331584 containerd[1462]: time="2025-05-08T05:21:41.331403021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:21:41.332377 containerd[1462]: time="2025-05-08T05:21:41.332110117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:21:41.332377 containerd[1462]: time="2025-05-08T05:21:41.332155192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:41.332377 containerd[1462]: time="2025-05-08T05:21:41.332257864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:41.363121 systemd[1]: Started cri-containerd-35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184.scope - libcontainer container 35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184. May 8 05:21:41.410849 containerd[1462]: time="2025-05-08T05:21:41.410812092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5mj5k,Uid:4957b03e-c51d-4c66-82a2-0815615ec4ed,Namespace:kube-system,Attempt:1,} returns sandbox id \"35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184\"" May 8 05:21:41.416278 containerd[1462]: time="2025-05-08T05:21:41.416212088Z" level=info msg="CreateContainer within sandbox \"35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 05:21:41.458939 containerd[1462]: time="2025-05-08T05:21:41.458304086Z" level=info msg="CreateContainer within sandbox \"35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5c8d139e6876b43f422c040ab88e120d635f6166072aeb4953a47e34bd4de1a\"" May 8 05:21:41.460653 containerd[1462]: time="2025-05-08T05:21:41.460036024Z" level=info msg="StartContainer for \"a5c8d139e6876b43f422c040ab88e120d635f6166072aeb4953a47e34bd4de1a\"" May 8 05:21:41.497317 systemd[1]: Started cri-containerd-a5c8d139e6876b43f422c040ab88e120d635f6166072aeb4953a47e34bd4de1a.scope - libcontainer container a5c8d139e6876b43f422c040ab88e120d635f6166072aeb4953a47e34bd4de1a. May 8 05:21:41.567149 containerd[1462]: time="2025-05-08T05:21:41.567011612Z" level=info msg="StartContainer for \"a5c8d139e6876b43f422c040ab88e120d635f6166072aeb4953a47e34bd4de1a\" returns successfully" May 8 05:21:41.927597 containerd[1462]: time="2025-05-08T05:21:41.927253524Z" level=info msg="StopPodSandbox for \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\"" May 8 05:21:41.928576 containerd[1462]: time="2025-05-08T05:21:41.927369100Z" level=info msg="StopPodSandbox for \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\"" May 8 05:21:41.929433 containerd[1462]: time="2025-05-08T05:21:41.929118622Z" level=info msg="StopPodSandbox for \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\"" May 8 05:21:41.932593 containerd[1462]: time="2025-05-08T05:21:41.932222844Z" level=info msg="StopPodSandbox for \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\"" May 8 05:21:42.255050 containerd[1462]: 2025-05-08 05:21:42.121 [INFO][4308] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:42.255050 containerd[1462]: 2025-05-08 05:21:42.122 [INFO][4308] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" iface="eth0" netns="/var/run/netns/cni-30d6e93d-3ec2-e6d0-ff89-b935bd4fd648" May 8 05:21:42.255050 containerd[1462]: 2025-05-08 05:21:42.123 [INFO][4308] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" iface="eth0" netns="/var/run/netns/cni-30d6e93d-3ec2-e6d0-ff89-b935bd4fd648" May 8 05:21:42.255050 containerd[1462]: 2025-05-08 05:21:42.125 [INFO][4308] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" iface="eth0" netns="/var/run/netns/cni-30d6e93d-3ec2-e6d0-ff89-b935bd4fd648" May 8 05:21:42.255050 containerd[1462]: 2025-05-08 05:21:42.125 [INFO][4308] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:42.255050 containerd[1462]: 2025-05-08 05:21:42.125 [INFO][4308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:42.255050 containerd[1462]: 2025-05-08 05:21:42.207 [INFO][4350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" HandleID="k8s-pod-network.7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:42.255050 containerd[1462]: 2025-05-08 05:21:42.207 [INFO][4350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:42.255050 containerd[1462]: 2025-05-08 05:21:42.208 [INFO][4350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:42.255050 containerd[1462]: 2025-05-08 05:21:42.243 [WARNING][4350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" HandleID="k8s-pod-network.7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:42.255050 containerd[1462]: 2025-05-08 05:21:42.243 [INFO][4350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" HandleID="k8s-pod-network.7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:42.255050 containerd[1462]: 2025-05-08 05:21:42.246 [INFO][4350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:42.255050 containerd[1462]: 2025-05-08 05:21:42.250 [INFO][4308] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:42.265852 containerd[1462]: time="2025-05-08T05:21:42.263113344Z" level=info msg="TearDown network for sandbox \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\" successfully" May 8 05:21:42.265852 containerd[1462]: time="2025-05-08T05:21:42.263145544Z" level=info msg="StopPodSandbox for \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\" returns successfully" May 8 05:21:42.261474 systemd[1]: run-netns-cni\x2d30d6e93d\x2d3ec2\x2de6d0\x2dff89\x2db935bd4fd648.mount: Deactivated successfully. 
May 8 05:21:42.266741 kubelet[2669]: I0508 05:21:42.258672 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5mj5k" podStartSLOduration=38.258653039 podStartE2EDuration="38.258653039s" podCreationTimestamp="2025-05-08 05:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:21:42.251886881 +0000 UTC m=+53.454917508" watchObservedRunningTime="2025-05-08 05:21:42.258653039 +0000 UTC m=+53.461683657" May 8 05:21:42.273018 containerd[1462]: time="2025-05-08T05:21:42.270491480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgt7d,Uid:752f8b41-8238-4fc9-90e8-01e9c2d7826e,Namespace:calico-system,Attempt:1,}" May 8 05:21:42.343224 containerd[1462]: 2025-05-08 05:21:42.108 [INFO][4300] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:42.343224 containerd[1462]: 2025-05-08 05:21:42.108 [INFO][4300] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" iface="eth0" netns="/var/run/netns/cni-70fd8b45-fce3-2b68-d201-08db55edcc73" May 8 05:21:42.343224 containerd[1462]: 2025-05-08 05:21:42.109 [INFO][4300] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" iface="eth0" netns="/var/run/netns/cni-70fd8b45-fce3-2b68-d201-08db55edcc73" May 8 05:21:42.343224 containerd[1462]: 2025-05-08 05:21:42.109 [INFO][4300] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" iface="eth0" netns="/var/run/netns/cni-70fd8b45-fce3-2b68-d201-08db55edcc73" May 8 05:21:42.343224 containerd[1462]: 2025-05-08 05:21:42.109 [INFO][4300] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:42.343224 containerd[1462]: 2025-05-08 05:21:42.109 [INFO][4300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:42.343224 containerd[1462]: 2025-05-08 05:21:42.214 [INFO][4339] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" HandleID="k8s-pod-network.8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:42.343224 containerd[1462]: 2025-05-08 05:21:42.217 [INFO][4339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:42.343224 containerd[1462]: 2025-05-08 05:21:42.246 [INFO][4339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:42.343224 containerd[1462]: 2025-05-08 05:21:42.292 [WARNING][4339] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" HandleID="k8s-pod-network.8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:42.343224 containerd[1462]: 2025-05-08 05:21:42.292 [INFO][4339] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" HandleID="k8s-pod-network.8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:42.343224 containerd[1462]: 2025-05-08 05:21:42.316 [INFO][4339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:42.343224 containerd[1462]: 2025-05-08 05:21:42.323 [INFO][4300] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:42.350534 containerd[1462]: time="2025-05-08T05:21:42.348086620Z" level=info msg="TearDown network for sandbox \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\" successfully" May 8 05:21:42.350534 containerd[1462]: time="2025-05-08T05:21:42.348326831Z" level=info msg="StopPodSandbox for \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\" returns successfully" May 8 05:21:42.349191 systemd[1]: run-netns-cni\x2d70fd8b45\x2dfce3\x2d2b68\x2dd201\x2d08db55edcc73.mount: Deactivated successfully. May 8 05:21:42.350717 containerd[1462]: time="2025-05-08T05:21:42.350671477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k8p4k,Uid:92de57fd-57ee-4cd6-9f52-21d9887e2fee,Namespace:kube-system,Attempt:1,}" May 8 05:21:42.400417 containerd[1462]: 2025-05-08 05:21:42.135 [INFO][4313] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:42.400417 containerd[1462]: 2025-05-08 05:21:42.136 [INFO][4313] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" iface="eth0" netns="/var/run/netns/cni-f6bf2d49-7bcf-da62-933b-354281a76fc6" May 8 05:21:42.400417 containerd[1462]: 2025-05-08 05:21:42.136 [INFO][4313] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" iface="eth0" netns="/var/run/netns/cni-f6bf2d49-7bcf-da62-933b-354281a76fc6" May 8 05:21:42.400417 containerd[1462]: 2025-05-08 05:21:42.137 [INFO][4313] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" iface="eth0" netns="/var/run/netns/cni-f6bf2d49-7bcf-da62-933b-354281a76fc6" May 8 05:21:42.400417 containerd[1462]: 2025-05-08 05:21:42.137 [INFO][4313] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:42.400417 containerd[1462]: 2025-05-08 05:21:42.137 [INFO][4313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:42.400417 containerd[1462]: 2025-05-08 05:21:42.232 [INFO][4355] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" HandleID="k8s-pod-network.6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:42.400417 containerd[1462]: 2025-05-08 05:21:42.234 [INFO][4355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:42.400417 containerd[1462]: 2025-05-08 05:21:42.316 [INFO][4355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:42.400417 containerd[1462]: 2025-05-08 05:21:42.371 [WARNING][4355] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" HandleID="k8s-pod-network.6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:42.400417 containerd[1462]: 2025-05-08 05:21:42.374 [INFO][4355] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" HandleID="k8s-pod-network.6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:42.400417 containerd[1462]: 2025-05-08 05:21:42.388 [INFO][4355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:42.400417 containerd[1462]: 2025-05-08 05:21:42.394 [INFO][4313] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:42.402493 containerd[1462]: time="2025-05-08T05:21:42.400604340Z" level=info msg="TearDown network for sandbox \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\" successfully" May 8 05:21:42.402493 containerd[1462]: time="2025-05-08T05:21:42.400644876Z" level=info msg="StopPodSandbox for \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\" returns successfully" May 8 05:21:42.404791 containerd[1462]: time="2025-05-08T05:21:42.403604155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564bcb876c-d22nw,Uid:1e517a02-62e9-4463-be28-71db2b7410c2,Namespace:calico-apiserver,Attempt:1,}" May 8 05:21:42.434211 containerd[1462]: 2025-05-08 05:21:42.111 [INFO][4318] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:42.434211 containerd[1462]: 2025-05-08 05:21:42.111 [INFO][4318] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" iface="eth0" netns="/var/run/netns/cni-2c83ae30-4f8d-c44a-6abe-61fbe45f1529" May 8 05:21:42.434211 containerd[1462]: 2025-05-08 05:21:42.112 [INFO][4318] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" iface="eth0" netns="/var/run/netns/cni-2c83ae30-4f8d-c44a-6abe-61fbe45f1529" May 8 05:21:42.434211 containerd[1462]: 2025-05-08 05:21:42.113 [INFO][4318] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" iface="eth0" netns="/var/run/netns/cni-2c83ae30-4f8d-c44a-6abe-61fbe45f1529" May 8 05:21:42.434211 containerd[1462]: 2025-05-08 05:21:42.113 [INFO][4318] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:42.434211 containerd[1462]: 2025-05-08 05:21:42.113 [INFO][4318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:42.434211 containerd[1462]: 2025-05-08 05:21:42.239 [INFO][4341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" HandleID="k8s-pod-network.d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:42.434211 containerd[1462]: 2025-05-08 05:21:42.240 [INFO][4341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:42.434211 containerd[1462]: 2025-05-08 05:21:42.389 [INFO][4341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:42.434211 containerd[1462]: 2025-05-08 05:21:42.415 [WARNING][4341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" HandleID="k8s-pod-network.d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:42.434211 containerd[1462]: 2025-05-08 05:21:42.416 [INFO][4341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" HandleID="k8s-pod-network.d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:42.434211 containerd[1462]: 2025-05-08 05:21:42.426 [INFO][4341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:42.434211 containerd[1462]: 2025-05-08 05:21:42.429 [INFO][4318] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:42.435084 containerd[1462]: time="2025-05-08T05:21:42.434502371Z" level=info msg="TearDown network for sandbox \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\" successfully" May 8 05:21:42.435084 containerd[1462]: time="2025-05-08T05:21:42.434535744Z" level=info msg="StopPodSandbox for \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\" returns successfully" May 8 05:21:42.436771 containerd[1462]: time="2025-05-08T05:21:42.436722665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b8776755-76bww,Uid:822354fe-2362-41e4-8ca0-056e0fecd083,Namespace:calico-system,Attempt:1,}" May 8 05:21:42.855299 systemd-networkd[1382]: cali660955f9168: Link UP May 8 05:21:42.861664 systemd-networkd[1382]: cali660955f9168: Gained carrier May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.603 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0 calico-apiserver-564bcb876c- calico-apiserver 1e517a02-62e9-4463-be28-71db2b7410c2 781 0 2025-05-08 05:21:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:564bcb876c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-3-n-e0f469a76e.novalocal calico-apiserver-564bcb876c-d22nw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali660955f9168 [] []}} ContainerID="c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-d22nw" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-" May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.603 [INFO][4391] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-d22nw" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.717 [INFO][4428] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" HandleID="k8s-pod-network.c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.738 [INFO][4428] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" HandleID="k8s-pod-network.c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eda20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-3-n-e0f469a76e.novalocal", "pod":"calico-apiserver-564bcb876c-d22nw", "timestamp":"2025-05-08 05:21:42.717231111 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e0f469a76e.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.738 [INFO][4428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.739 [INFO][4428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.740 [INFO][4428] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e0f469a76e.novalocal' May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.746 [INFO][4428] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.759 [INFO][4428] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.770 [INFO][4428] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.776 [INFO][4428] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.788 [INFO][4428] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.792 [INFO][4428] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.806 [INFO][4428] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412 May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.824 [INFO][4428] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.840 [INFO][4428] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.195/26] block=192.168.65.192/26 handle="k8s-pod-network.c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.841 [INFO][4428] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.195/26] handle="k8s-pod-network.c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.841 [INFO][4428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
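For scale on the kubelet pod_startup_latency_tracker entry a few lines up: both pull timestamps are the zero time (0001-01-01), so no image-pull window is subtracted and podStartSLOduration is simply observedRunningTime minus podCreationTimestamp. Checking the arithmetic against the logged values:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Layout matching Go's default time.Time formatting, as printed by kubelet.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    created, _ := time.Parse(layout, "2025-05-08 05:21:04 +0000 UTC")
    running, _ := time.Parse(layout, "2025-05-08 05:21:42.258653039 +0000 UTC")
    // With firstStartedPulling/lastFinishedPulling at the zero time, no pull
    // latency is deducted: the SLO duration is just running - created.
    fmt.Println(running.Sub(created)) // 38.258653039s
}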
May 8 05:21:42.894926 containerd[1462]: 2025-05-08 05:21:42.842 [INFO][4428] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.195/26] IPv6=[] ContainerID="c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" HandleID="k8s-pod-network.c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:42.897050 containerd[1462]: 2025-05-08 05:21:42.845 [INFO][4391] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-d22nw" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0", GenerateName:"calico-apiserver-564bcb876c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e517a02-62e9-4463-be28-71db2b7410c2", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564bcb876c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"", Pod:"calico-apiserver-564bcb876c-d22nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali660955f9168", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:42.897050 containerd[1462]: 2025-05-08 05:21:42.846 [INFO][4391] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.195/32] ContainerID="c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-d22nw" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:42.897050 containerd[1462]: 2025-05-08 05:21:42.846 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali660955f9168 ContainerID="c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-d22nw" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:42.897050 containerd[1462]: 2025-05-08 05:21:42.859 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-d22nw" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:42.897050 
containerd[1462]: 2025-05-08 05:21:42.860 [INFO][4391] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-d22nw" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0", GenerateName:"calico-apiserver-564bcb876c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e517a02-62e9-4463-be28-71db2b7410c2", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564bcb876c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412", Pod:"calico-apiserver-564bcb876c-d22nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali660955f9168", MAC:"5a:93:9a:fa:00:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:42.897050 containerd[1462]: 2025-05-08 05:21:42.888 [INFO][4391] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412" Namespace="calico-apiserver" Pod="calico-apiserver-564bcb876c-d22nw" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:42.946838 systemd-networkd[1382]: cali12371a08912: Link UP May 8 05:21:42.947750 systemd-networkd[1382]: cali12371a08912: Gained carrier May 8 05:21:42.981309 containerd[1462]: time="2025-05-08T05:21:42.980826599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:21:42.981778 containerd[1462]: time="2025-05-08T05:21:42.981690018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:21:42.981778 containerd[1462]: time="2025-05-08T05:21:42.981724713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:42.983260 containerd[1462]: time="2025-05-08T05:21:42.983140429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.599 [INFO][4378] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0 coredns-7db6d8ff4d- kube-system 92de57fd-57ee-4cd6-9f52-21d9887e2fee 778 0 2025-05-08 05:21:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-3-n-e0f469a76e.novalocal coredns-7db6d8ff4d-k8p4k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali12371a08912 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k8p4k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-" May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.599 [INFO][4378] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k8p4k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.785 [INFO][4424] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" HandleID="k8s-pod-network.474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.818 [INFO][4424] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" HandleID="k8s-pod-network.474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00020b230), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-3-n-e0f469a76e.novalocal", "pod":"coredns-7db6d8ff4d-k8p4k", "timestamp":"2025-05-08 05:21:42.785039777 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e0f469a76e.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.819 [INFO][4424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.841 [INFO][4424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.841 [INFO][4424] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e0f469a76e.novalocal' May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.846 [INFO][4424] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.871 [INFO][4424] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.887 [INFO][4424] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.892 [INFO][4424] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.896 [INFO][4424] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.896 [INFO][4424] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.900 [INFO][4424] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.912 [INFO][4424] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.930 [INFO][4424] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.196/26] block=192.168.65.192/26 handle="k8s-pod-network.474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.930 [INFO][4424] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.196/26] handle="k8s-pod-network.474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.930 [INFO][4424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
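Each veth in this log follows the same sequence: "Link UP", "Gained carrier", then "Gained IPv6LL" once duplicate-address detection completes on the interface's fe80::/64 address. With the kernel's default addr_gen_mode, that address is the modified EUI-64 of the MAC (flip the universal/local bit, splice in ff:fe). A sketch using the workload MAC recorded earlier for cali8c9dbacfc1f — with the caveat that the host-side cali interface may carry a different, fixed MAC, and stable-privacy address generation would yield something else entirely:

package main

import (
    "fmt"
    "net"
)

// linkLocalFromMAC builds the EUI-64 based fe80:: address for a 48-bit MAC:
// invert bit 1 of the first octet and insert 0xff,0xfe between the halves.
func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
    ip := make(net.IP, net.IPv6len)
    ip[0], ip[1] = 0xfe, 0x80
    ip[8] = mac[0] ^ 0x02 // flip the universal/local bit
    ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
    ip[12], ip[13], ip[14], ip[15] = 0xfe, mac[3], mac[4], mac[5]
    return ip
}

func main() {
    mac, _ := net.ParseMAC("5a:44:c8:0a:57:d7") // cali8c9dbacfc1f endpoint MAC from the log
    fmt.Println(linkLocalFromMAC(mac))          // fe80::5844:c8ff:fe0a:57d7
}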
May 8 05:21:42.988548 containerd[1462]: 2025-05-08 05:21:42.930 [INFO][4424] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.196/26] IPv6=[] ContainerID="474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" HandleID="k8s-pod-network.474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:42.991000 containerd[1462]: 2025-05-08 05:21:42.938 [INFO][4378] cni-plugin/k8s.go 386: Populated endpoint ContainerID="474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k8p4k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92de57fd-57ee-4cd6-9f52-21d9887e2fee", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-k8p4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12371a08912", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:42.991000 containerd[1462]: 2025-05-08 05:21:42.938 [INFO][4378] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.196/32] ContainerID="474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k8p4k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:42.991000 containerd[1462]: 2025-05-08 05:21:42.941 [INFO][4378] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12371a08912 ContainerID="474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k8p4k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:42.991000 containerd[1462]: 2025-05-08 05:21:42.948 [INFO][4378] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-k8p4k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:42.991000 containerd[1462]: 2025-05-08 05:21:42.949 [INFO][4378] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k8p4k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92de57fd-57ee-4cd6-9f52-21d9887e2fee", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca", Pod:"coredns-7db6d8ff4d-k8p4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12371a08912", MAC:"36:3f:bd:a7:05:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:42.991000 containerd[1462]: 2025-05-08 05:21:42.981 [INFO][4378] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca" Namespace="kube-system" Pod="coredns-7db6d8ff4d-k8p4k" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:43.041566 systemd[1]: Started cri-containerd-c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412.scope - libcontainer container c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412. May 8 05:21:43.086772 systemd[1]: run-netns-cni\x2d2c83ae30\x2d4f8d\x2dc44a\x2d6abe\x2d61fbe45f1529.mount: Deactivated successfully. May 8 05:21:43.087056 systemd[1]: run-netns-cni\x2df6bf2d49\x2d7bcf\x2dda62\x2d933b\x2d354281a76fc6.mount: Deactivated successfully. May 8 05:21:43.103342 systemd-networkd[1382]: cali421fadbb921: Link UP May 8 05:21:43.104510 systemd-networkd[1382]: cali421fadbb921: Gained carrier May 8 05:21:43.145886 containerd[1462]: time="2025-05-08T05:21:43.143143164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:21:43.145886 containerd[1462]: time="2025-05-08T05:21:43.143227101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:21:43.145886 containerd[1462]: time="2025-05-08T05:21:43.143246938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:43.145886 containerd[1462]: time="2025-05-08T05:21:43.143362195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:42.597 [INFO][4368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0 csi-node-driver- calico-system 752f8b41-8238-4fc9-90e8-01e9c2d7826e 780 0 2025-05-08 05:21:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-3-n-e0f469a76e.novalocal csi-node-driver-lgt7d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali421fadbb921 [] []}} ContainerID="a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" Namespace="calico-system" Pod="csi-node-driver-lgt7d" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-" May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:42.597 [INFO][4368] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" Namespace="calico-system" Pod="csi-node-driver-lgt7d" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:42.788 [INFO][4427] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" HandleID="k8s-pod-network.a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:42.823 [INFO][4427] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" HandleID="k8s-pod-network.a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000517c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-e0f469a76e.novalocal", "pod":"csi-node-driver-lgt7d", "timestamp":"2025-05-08 05:21:42.788121787 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e0f469a76e.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:42.824 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:42.930 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:42.931 [INFO][4427] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e0f469a76e.novalocal' May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:42.937 [INFO][4427] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:42.949 [INFO][4427] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:42.990 [INFO][4427] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:42.997 [INFO][4427] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:43.002 [INFO][4427] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:43.003 [INFO][4427] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:43.008 [INFO][4427] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1 May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:43.018 [INFO][4427] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:43.040 [INFO][4427] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.197/26] block=192.168.65.192/26 handle="k8s-pod-network.a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:43.040 [INFO][4427] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.197/26] handle="k8s-pod-network.a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:43.040 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
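The ipam.go records above trace Calico's block-affinity flow end to end: take the host-wide IPAM lock, confirm this host's affinity to 192.168.65.192/26, load the block, claim the next free address (192.168.65.197 here), write the block back to claim the IP, release the lock. A minimal Go sketch of that shape follows; the names (block, claim) and the serialize-with-a-mutex detail are illustrative assumptions, not Calico's actual implementation:

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // block stands in for one affine IPAM block (192.168.65.192/26 above),
    // with a lock mirroring the "Acquired host-wide IPAM lock" step.
    type block struct {
        mu    sync.Mutex
        cidr  netip.Prefix
        inUse map[netip.Addr]string // address -> handle that claimed it
    }

    func newBlock(cidr string) *block {
        return &block{cidr: netip.MustParsePrefix(cidr), inUse: map[netip.Addr]string{}}
    }

    // claim hands out the first free address in the block and records the
    // handle ("k8s-pod-network.<containerID>" in the records above).
    func (b *block) claim(handle string) (netip.Addr, error) {
        b.mu.Lock()
        defer b.mu.Unlock()
        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
            if _, taken := b.inUse[a]; !taken {
                b.inUse[a] = handle
                return a, nil
            }
        }
        return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
        b := newBlock("192.168.65.192/26")
        for i := 0; i < 5; i++ { // pretend .192-.196 are taken, roughly the state in this log
            b.claim(fmt.Sprintf("existing-%d", i))
        }
        ip, _ := b.claim("k8s-pod-network.a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1")
        fmt.Println(ip) // 192.168.65.197, the address the records report for csi-node-driver-lgt7d
    }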
May 8 05:21:43.146539 containerd[1462]: 2025-05-08 05:21:43.040 [INFO][4427] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.197/26] IPv6=[] ContainerID="a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" HandleID="k8s-pod-network.a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:43.148819 containerd[1462]: 2025-05-08 05:21:43.053 [INFO][4368] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" Namespace="calico-system" Pod="csi-node-driver-lgt7d" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"752f8b41-8238-4fc9-90e8-01e9c2d7826e", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"", Pod:"csi-node-driver-lgt7d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali421fadbb921", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:43.148819 containerd[1462]: 2025-05-08 05:21:43.053 [INFO][4368] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.197/32] ContainerID="a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" Namespace="calico-system" Pod="csi-node-driver-lgt7d" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:43.148819 containerd[1462]: 2025-05-08 05:21:43.054 [INFO][4368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali421fadbb921 ContainerID="a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" Namespace="calico-system" Pod="csi-node-driver-lgt7d" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:43.148819 containerd[1462]: 2025-05-08 05:21:43.106 [INFO][4368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" Namespace="calico-system" Pod="csi-node-driver-lgt7d" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:43.148819 containerd[1462]: 2025-05-08 05:21:43.109 [INFO][4368] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" Namespace="calico-system" Pod="csi-node-driver-lgt7d" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"752f8b41-8238-4fc9-90e8-01e9c2d7826e", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1", Pod:"csi-node-driver-lgt7d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali421fadbb921", MAC:"de:f6:85:37:52:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:43.148819 containerd[1462]: 2025-05-08 05:21:43.137 [INFO][4368] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1" Namespace="calico-system" Pod="csi-node-driver-lgt7d" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:43.180125 systemd-networkd[1382]: calic53b9346862: Link UP May 8 05:21:43.181090 systemd-networkd[1382]: calic53b9346862: Gained carrier May 8 05:21:43.217260 systemd[1]: Started cri-containerd-474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca.scope - libcontainer container 474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca. 
May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:42.677 [INFO][4405] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0 calico-kube-controllers-59b8776755- calico-system 822354fe-2362-41e4-8ca0-056e0fecd083 779 0 2025-05-08 05:21:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59b8776755 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-3-n-e0f469a76e.novalocal calico-kube-controllers-59b8776755-76bww eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic53b9346862 [] []}} ContainerID="94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" Namespace="calico-system" Pod="calico-kube-controllers-59b8776755-76bww" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-" May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:42.677 [INFO][4405] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" Namespace="calico-system" Pod="calico-kube-controllers-59b8776755-76bww" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:42.798 [INFO][4443] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" HandleID="k8s-pod-network.94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:42.830 [INFO][4443] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" HandleID="k8s-pod-network.94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000fa820), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-3-n-e0f469a76e.novalocal", "pod":"calico-kube-controllers-59b8776755-76bww", "timestamp":"2025-05-08 05:21:42.798364937 +0000 UTC"}, Hostname:"ci-4081-3-3-n-e0f469a76e.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:42.830 [INFO][4443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.040 [INFO][4443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.041 [INFO][4443] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-3-n-e0f469a76e.novalocal' May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.045 [INFO][4443] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.054 [INFO][4443] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.107 [INFO][4443] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.115 [INFO][4443] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.122 [INFO][4443] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.122 [INFO][4443] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.126 [INFO][4443] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4 May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.134 [INFO][4443] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.152 [INFO][4443] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.198/26] block=192.168.65.192/26 handle="k8s-pod-network.94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.152 [INFO][4443] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.198/26] handle="k8s-pod-network.94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" host="ci-4081-3-3-n-e0f469a76e.novalocal" May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.152 [INFO][4443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
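Every Calico record embedded in the containerd stream has the same fixed shape: timestamp, [LEVEL][pid], source file and line, then the free-form message. That makes the capture easy to pick apart mechanically. The pattern below is inferred from the lines above, not a format Calico documents or guarantees:

    package main

    import (
        "fmt"
        "regexp"
    )

    // calicoLine: timestamp, [LEVEL][pid], source file, line number, message.
    var calicoLine = regexp.MustCompile(`^(\S+ \S+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

    func main() {
        rec := "2025-05-08 05:21:43.152 [INFO][4443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock."
        if m := calicoLine.FindStringSubmatch(rec); m != nil {
            fmt.Printf("time=%s level=%s pid=%s file=%s:%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }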
May 8 05:21:43.222591 containerd[1462]: 2025-05-08 05:21:43.152 [INFO][4443] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.198/26] IPv6=[] ContainerID="94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" HandleID="k8s-pod-network.94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:43.223927 containerd[1462]: 2025-05-08 05:21:43.170 [INFO][4405] cni-plugin/k8s.go 386: Populated endpoint ContainerID="94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" Namespace="calico-system" Pod="calico-kube-controllers-59b8776755-76bww" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0", GenerateName:"calico-kube-controllers-59b8776755-", Namespace:"calico-system", SelfLink:"", UID:"822354fe-2362-41e4-8ca0-056e0fecd083", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b8776755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"", Pod:"calico-kube-controllers-59b8776755-76bww", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic53b9346862", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:43.223927 containerd[1462]: 2025-05-08 05:21:43.171 [INFO][4405] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.198/32] ContainerID="94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" Namespace="calico-system" Pod="calico-kube-controllers-59b8776755-76bww" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:43.223927 containerd[1462]: 2025-05-08 05:21:43.171 [INFO][4405] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic53b9346862 ContainerID="94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" Namespace="calico-system" Pod="calico-kube-controllers-59b8776755-76bww" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:43.223927 containerd[1462]: 2025-05-08 05:21:43.177 [INFO][4405] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" Namespace="calico-system" Pod="calico-kube-controllers-59b8776755-76bww" 
WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:43.223927 containerd[1462]: 2025-05-08 05:21:43.184 [INFO][4405] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" Namespace="calico-system" Pod="calico-kube-controllers-59b8776755-76bww" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0", GenerateName:"calico-kube-controllers-59b8776755-", Namespace:"calico-system", SelfLink:"", UID:"822354fe-2362-41e4-8ca0-056e0fecd083", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b8776755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4", Pod:"calico-kube-controllers-59b8776755-76bww", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic53b9346862", MAC:"fa:51:0a:cb:d6:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:43.223927 containerd[1462]: 2025-05-08 05:21:43.206 [INFO][4405] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4" Namespace="calico-system" Pod="calico-kube-controllers-59b8776755-76bww" WorkloadEndpoint="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:43.275129 systemd-networkd[1382]: caliab5c6df7d0c: Gained IPv6LL May 8 05:21:43.307246 containerd[1462]: time="2025-05-08T05:21:43.307126213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:21:43.308498 containerd[1462]: time="2025-05-08T05:21:43.307996395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:21:43.308498 containerd[1462]: time="2025-05-08T05:21:43.308045076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:43.308498 containerd[1462]: time="2025-05-08T05:21:43.308383892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:43.343855 containerd[1462]: time="2025-05-08T05:21:43.342310567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 05:21:43.343855 containerd[1462]: time="2025-05-08T05:21:43.342377563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 05:21:43.343855 containerd[1462]: time="2025-05-08T05:21:43.342396538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:43.346004 containerd[1462]: time="2025-05-08T05:21:43.343829486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 05:21:43.356147 systemd[1]: Started cri-containerd-a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1.scope - libcontainer container a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1. May 8 05:21:43.380009 containerd[1462]: time="2025-05-08T05:21:43.379901153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-k8p4k,Uid:92de57fd-57ee-4cd6-9f52-21d9887e2fee,Namespace:kube-system,Attempt:1,} returns sandbox id \"474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca\"" May 8 05:21:43.394855 containerd[1462]: time="2025-05-08T05:21:43.394816426Z" level=info msg="CreateContainer within sandbox \"474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 05:21:43.408186 systemd[1]: Started cri-containerd-94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4.scope - libcontainer container 94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4. 
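The four "loading plugin" records repeat for every runc.v2 shim that starts, one per sandbox: the shim brings up its event publisher, shutdown, task, and pause services in order. A hypothetical, heavily simplified registry illustrating that ordered bring-up; the real wiring lives in containerd's plugin package:

    package main

    import "fmt"

    // shimPlugin is a hypothetical stand-in; the real registrations go
    // through containerd's plugin registry inside the runc.v2 shim.
    type shimPlugin struct {
        id, typ string
        load    func() error
    }

    func main() {
        // One entry per record above: publisher, shutdown, task, pause.
        plugins := []shimPlugin{
            {"publisher", "io.containerd.event.v1", func() error { return nil }},
            {"shutdown", "io.containerd.internal.v1", func() error { return nil }},
            {"task", "io.containerd.ttrpc.v1", func() error { return nil }},
            {"pause", "io.containerd.ttrpc.v1", func() error { return nil }},
        }
        for _, p := range plugins {
            if err := p.load(); err != nil {
                fmt.Printf("failed to load %s.%s: %v\n", p.typ, p.id, err)
                continue
            }
            fmt.Printf("loading plugin %q... type=%s\n", p.typ+"."+p.id, p.typ)
        }
    }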
May 8 05:21:43.472099 containerd[1462]: time="2025-05-08T05:21:43.472037234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-564bcb876c-d22nw,Uid:1e517a02-62e9-4463-be28-71db2b7410c2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412\"" May 8 05:21:43.481318 containerd[1462]: time="2025-05-08T05:21:43.481197813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgt7d,Uid:752f8b41-8238-4fc9-90e8-01e9c2d7826e,Namespace:calico-system,Attempt:1,} returns sandbox id \"a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1\"" May 8 05:21:43.555419 containerd[1462]: time="2025-05-08T05:21:43.554915911Z" level=info msg="CreateContainer within sandbox \"474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"337afc652556ffacd13370670ea2ec6f8695912ad076d2b57188026e07d2ba6a\"" May 8 05:21:43.573735 containerd[1462]: time="2025-05-08T05:21:43.572640061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b8776755-76bww,Uid:822354fe-2362-41e4-8ca0-056e0fecd083,Namespace:calico-system,Attempt:1,} returns sandbox id \"94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4\"" May 8 05:21:43.583192 containerd[1462]: time="2025-05-08T05:21:43.583140293Z" level=info msg="StartContainer for \"337afc652556ffacd13370670ea2ec6f8695912ad076d2b57188026e07d2ba6a\"" May 8 05:21:43.634191 systemd[1]: Started cri-containerd-337afc652556ffacd13370670ea2ec6f8695912ad076d2b57188026e07d2ba6a.scope - libcontainer container 337afc652556ffacd13370670ea2ec6f8695912ad076d2b57188026e07d2ba6a. May 8 05:21:43.681451 containerd[1462]: time="2025-05-08T05:21:43.681252561Z" level=info msg="StartContainer for \"337afc652556ffacd13370670ea2ec6f8695912ad076d2b57188026e07d2ba6a\" returns successfully" May 8 05:21:44.302222 systemd-networkd[1382]: cali660955f9168: Gained IPv6LL May 8 05:21:44.310377 kubelet[2669]: I0508 05:21:44.309490 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-k8p4k" podStartSLOduration=40.309467943 podStartE2EDuration="40.309467943s" podCreationTimestamp="2025-05-08 05:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 05:21:44.283220739 +0000 UTC m=+55.486251366" watchObservedRunningTime="2025-05-08 05:21:44.309467943 +0000 UTC m=+55.512498560" May 8 05:21:44.364071 systemd-networkd[1382]: cali12371a08912: Gained IPv6LL May 8 05:21:44.683617 systemd-networkd[1382]: cali421fadbb921: Gained IPv6LL May 8 05:21:45.195490 systemd-networkd[1382]: calic53b9346862: Gained IPv6LL May 8 05:21:46.159294 containerd[1462]: time="2025-05-08T05:21:46.159103016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:46.162706 containerd[1462]: time="2025-05-08T05:21:46.161985793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 8 05:21:46.164357 containerd[1462]: time="2025-05-08T05:21:46.164044394Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:46.168120 containerd[1462]: 
time="2025-05-08T05:21:46.168041239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:46.169028 containerd[1462]: time="2025-05-08T05:21:46.168730482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 5.967018045s" May 8 05:21:46.169028 containerd[1462]: time="2025-05-08T05:21:46.168767361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 05:21:46.175395 containerd[1462]: time="2025-05-08T05:21:46.175119735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 8 05:21:46.181042 containerd[1462]: time="2025-05-08T05:21:46.180243704Z" level=info msg="CreateContainer within sandbox \"a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 05:21:46.208361 containerd[1462]: time="2025-05-08T05:21:46.207098787Z" level=info msg="CreateContainer within sandbox \"a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6c52cd82002695ed417a7a8cb04fec18852bf4d35b3b7454ffa75f61a88449d5\"" May 8 05:21:46.209679 containerd[1462]: time="2025-05-08T05:21:46.208694831Z" level=info msg="StartContainer for \"6c52cd82002695ed417a7a8cb04fec18852bf4d35b3b7454ffa75f61a88449d5\"" May 8 05:21:46.251581 systemd[1]: run-containerd-runc-k8s.io-6c52cd82002695ed417a7a8cb04fec18852bf4d35b3b7454ffa75f61a88449d5-runc.1ZodDH.mount: Deactivated successfully. May 8 05:21:46.264225 systemd[1]: Started cri-containerd-6c52cd82002695ed417a7a8cb04fec18852bf4d35b3b7454ffa75f61a88449d5.scope - libcontainer container 6c52cd82002695ed417a7a8cb04fec18852bf4d35b3b7454ffa75f61a88449d5. 
May 8 05:21:46.352246 containerd[1462]: time="2025-05-08T05:21:46.352095193Z" level=info msg="StartContainer for \"6c52cd82002695ed417a7a8cb04fec18852bf4d35b3b7454ffa75f61a88449d5\" returns successfully" May 8 05:21:46.700052 containerd[1462]: time="2025-05-08T05:21:46.699994513Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:46.701644 containerd[1462]: time="2025-05-08T05:21:46.701143017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 8 05:21:46.703554 containerd[1462]: time="2025-05-08T05:21:46.703496651Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 528.337762ms" May 8 05:21:46.703666 containerd[1462]: time="2025-05-08T05:21:46.703648125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 8 05:21:46.716382 containerd[1462]: time="2025-05-08T05:21:46.716342001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 8 05:21:46.723764 containerd[1462]: time="2025-05-08T05:21:46.723689330Z" level=info msg="CreateContainer within sandbox \"c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 8 05:21:46.747791 containerd[1462]: time="2025-05-08T05:21:46.747740486Z" level=info msg="CreateContainer within sandbox \"c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4a09fffda4f1297027880229363de1b0668c615e242bc469a3dcbf9f6bb7c0ac\"" May 8 05:21:46.748379 containerd[1462]: time="2025-05-08T05:21:46.748353656Z" level=info msg="StartContainer for \"4a09fffda4f1297027880229363de1b0668c615e242bc469a3dcbf9f6bb7c0ac\"" May 8 05:21:46.792746 systemd[1]: Started cri-containerd-4a09fffda4f1297027880229363de1b0668c615e242bc469a3dcbf9f6bb7c0ac.scope - libcontainer container 4a09fffda4f1297027880229363de1b0668c615e242bc469a3dcbf9f6bb7c0ac. 
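The pod_startup_latency_tracker records just below are plain arithmetic over the timestamps they print: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (05:21:47.363243491 - 05:21:10 = 37.363243491s), and podStartSLOduration further subtracts the image-pull window (firstStartedPulling to lastFinishedPulling, which kubelet measures on the monotonic m=+ clock). A quick check in Go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the "2025-05-08 05:21:10 +0000 UTC" form kubelet logs.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-05-08 05:21:10 +0000 UTC")
        running, _ := time.Parse(layout, "2025-05-08 05:21:47.363243491 +0000 UTC")
        pullStart, _ := time.Parse(layout, "2025-05-08 05:21:43.475543128 +0000 UTC")
        pullEnd, _ := time.Parse(layout, "2025-05-08 05:21:46.715384235 +0000 UTC")

        e2e := running.Sub(created)
        slo := e2e - pullEnd.Sub(pullStart)
        // 37.363243491s and ~34.1234s; the log's 34.123402394 differs in the
        // last digits because kubelet subtracts on the monotonic reading.
        fmt.Println(e2e, slo)
    }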
May 8 05:21:46.886868 containerd[1462]: time="2025-05-08T05:21:46.886309167Z" level=info msg="StartContainer for \"4a09fffda4f1297027880229363de1b0668c615e242bc469a3dcbf9f6bb7c0ac\" returns successfully" May 8 05:21:47.363538 kubelet[2669]: I0508 05:21:47.363268 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-564bcb876c-d22nw" podStartSLOduration=34.123402394 podStartE2EDuration="37.363243491s" podCreationTimestamp="2025-05-08 05:21:10 +0000 UTC" firstStartedPulling="2025-05-08 05:21:43.475543128 +0000 UTC m=+54.678573755" lastFinishedPulling="2025-05-08 05:21:46.715384235 +0000 UTC m=+57.918414852" observedRunningTime="2025-05-08 05:21:47.32939995 +0000 UTC m=+58.532430577" watchObservedRunningTime="2025-05-08 05:21:47.363243491 +0000 UTC m=+58.566274118" May 8 05:21:47.363538 kubelet[2669]: I0508 05:21:47.363379 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-564bcb876c-l5rx9" podStartSLOduration=31.390214292 podStartE2EDuration="37.363373565s" podCreationTimestamp="2025-05-08 05:21:10 +0000 UTC" firstStartedPulling="2025-05-08 05:21:40.20134663 +0000 UTC m=+51.404377247" lastFinishedPulling="2025-05-08 05:21:46.174505893 +0000 UTC m=+57.377536520" observedRunningTime="2025-05-08 05:21:47.363134907 +0000 UTC m=+58.566165534" watchObservedRunningTime="2025-05-08 05:21:47.363373565 +0000 UTC m=+58.566404192" May 8 05:21:48.306140 kubelet[2669]: I0508 05:21:48.305465 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 05:21:48.306140 kubelet[2669]: I0508 05:21:48.305506 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 05:21:48.909322 containerd[1462]: time="2025-05-08T05:21:48.907967958Z" level=info msg="StopPodSandbox for \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\"" May 8 05:21:49.079337 containerd[1462]: 2025-05-08 05:21:48.985 [WARNING][4854] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0", GenerateName:"calico-kube-controllers-59b8776755-", Namespace:"calico-system", SelfLink:"", UID:"822354fe-2362-41e4-8ca0-056e0fecd083", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b8776755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4", Pod:"calico-kube-controllers-59b8776755-76bww", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic53b9346862", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:49.079337 containerd[1462]: 2025-05-08 05:21:48.986 [INFO][4854] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:49.079337 containerd[1462]: 2025-05-08 05:21:48.986 [INFO][4854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" iface="eth0" netns="" May 8 05:21:49.079337 containerd[1462]: 2025-05-08 05:21:48.986 [INFO][4854] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:49.079337 containerd[1462]: 2025-05-08 05:21:48.986 [INFO][4854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:49.079337 containerd[1462]: 2025-05-08 05:21:49.048 [INFO][4863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" HandleID="k8s-pod-network.d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:49.079337 containerd[1462]: 2025-05-08 05:21:49.048 [INFO][4863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:49.079337 containerd[1462]: 2025-05-08 05:21:49.048 [INFO][4863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:49.079337 containerd[1462]: 2025-05-08 05:21:49.065 [WARNING][4863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" HandleID="k8s-pod-network.d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:49.079337 containerd[1462]: 2025-05-08 05:21:49.065 [INFO][4863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" HandleID="k8s-pod-network.d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:49.079337 containerd[1462]: 2025-05-08 05:21:49.068 [INFO][4863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:49.079337 containerd[1462]: 2025-05-08 05:21:49.074 [INFO][4854] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:49.079337 containerd[1462]: time="2025-05-08T05:21:49.077818358Z" level=info msg="TearDown network for sandbox \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\" successfully" May 8 05:21:49.079337 containerd[1462]: time="2025-05-08T05:21:49.077844837Z" level=info msg="StopPodSandbox for \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\" returns successfully" May 8 05:21:49.079964 containerd[1462]: time="2025-05-08T05:21:49.079580523Z" level=info msg="RemovePodSandbox for \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\"" May 8 05:21:49.079964 containerd[1462]: time="2025-05-08T05:21:49.079607944Z" level=info msg="Forcibly stopping sandbox \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\"" May 8 05:21:49.221464 containerd[1462]: 2025-05-08 05:21:49.147 [WARNING][4881] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0", GenerateName:"calico-kube-controllers-59b8776755-", Namespace:"calico-system", SelfLink:"", UID:"822354fe-2362-41e4-8ca0-056e0fecd083", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b8776755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4", Pod:"calico-kube-controllers-59b8776755-76bww", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic53b9346862", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:49.221464 containerd[1462]: 2025-05-08 05:21:49.147 [INFO][4881] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:49.221464 containerd[1462]: 2025-05-08 05:21:49.147 [INFO][4881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" iface="eth0" netns="" May 8 05:21:49.221464 containerd[1462]: 2025-05-08 05:21:49.147 [INFO][4881] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:49.221464 containerd[1462]: 2025-05-08 05:21:49.147 [INFO][4881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:49.221464 containerd[1462]: 2025-05-08 05:21:49.204 [INFO][4888] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" HandleID="k8s-pod-network.d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:49.221464 containerd[1462]: 2025-05-08 05:21:49.204 [INFO][4888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:49.221464 containerd[1462]: 2025-05-08 05:21:49.205 [INFO][4888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:49.221464 containerd[1462]: 2025-05-08 05:21:49.215 [WARNING][4888] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" HandleID="k8s-pod-network.d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:49.221464 containerd[1462]: 2025-05-08 05:21:49.215 [INFO][4888] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" HandleID="k8s-pod-network.d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--kube--controllers--59b8776755--76bww-eth0" May 8 05:21:49.221464 containerd[1462]: 2025-05-08 05:21:49.217 [INFO][4888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:49.221464 containerd[1462]: 2025-05-08 05:21:49.219 [INFO][4881] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606" May 8 05:21:49.221963 containerd[1462]: time="2025-05-08T05:21:49.221530077Z" level=info msg="TearDown network for sandbox \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\" successfully" May 8 05:21:49.401472 containerd[1462]: time="2025-05-08T05:21:49.401144738Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:21:49.401472 containerd[1462]: time="2025-05-08T05:21:49.401249114Z" level=info msg="RemovePodSandbox \"d74c3669f13b6a8d7608e430dbf230e1d7966bfa8676e1d33495feec1f818606\" returns successfully" May 8 05:21:49.403149 containerd[1462]: time="2025-05-08T05:21:49.403065952Z" level=info msg="StopPodSandbox for \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\"" May 8 05:21:49.560389 containerd[1462]: 2025-05-08 05:21:49.487 [WARNING][4914] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0", GenerateName:"calico-apiserver-564bcb876c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ada9b516-fc2d-475c-96a6-11055bc82985", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564bcb876c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5", Pod:"calico-apiserver-564bcb876c-l5rx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c9dbacfc1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:49.560389 containerd[1462]: 2025-05-08 05:21:49.487 [INFO][4914] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:49.560389 containerd[1462]: 2025-05-08 05:21:49.487 [INFO][4914] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" iface="eth0" netns="" May 8 05:21:49.560389 containerd[1462]: 2025-05-08 05:21:49.487 [INFO][4914] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:49.560389 containerd[1462]: 2025-05-08 05:21:49.487 [INFO][4914] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:49.560389 containerd[1462]: 2025-05-08 05:21:49.533 [INFO][4922] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" HandleID="k8s-pod-network.62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:49.560389 containerd[1462]: 2025-05-08 05:21:49.534 [INFO][4922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:49.560389 containerd[1462]: 2025-05-08 05:21:49.535 [INFO][4922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:49.560389 containerd[1462]: 2025-05-08 05:21:49.549 [WARNING][4922] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" HandleID="k8s-pod-network.62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:49.560389 containerd[1462]: 2025-05-08 05:21:49.550 [INFO][4922] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" HandleID="k8s-pod-network.62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:49.560389 containerd[1462]: 2025-05-08 05:21:49.554 [INFO][4922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:49.560389 containerd[1462]: 2025-05-08 05:21:49.557 [INFO][4914] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:49.560389 containerd[1462]: time="2025-05-08T05:21:49.560352780Z" level=info msg="TearDown network for sandbox \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\" successfully" May 8 05:21:49.560389 containerd[1462]: time="2025-05-08T05:21:49.560379470Z" level=info msg="StopPodSandbox for \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\" returns successfully" May 8 05:21:49.561498 containerd[1462]: time="2025-05-08T05:21:49.561139165Z" level=info msg="RemovePodSandbox for \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\"" May 8 05:21:49.561498 containerd[1462]: time="2025-05-08T05:21:49.561166476Z" level=info msg="Forcibly stopping sandbox \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\"" May 8 05:21:49.753162 containerd[1462]: 2025-05-08 05:21:49.664 [WARNING][4941] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0", GenerateName:"calico-apiserver-564bcb876c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ada9b516-fc2d-475c-96a6-11055bc82985", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564bcb876c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"a20085e18ca785ee6cc4bb1db100d4bbeee586dae0159765313082a86b26a5c5", Pod:"calico-apiserver-564bcb876c-l5rx9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c9dbacfc1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:49.753162 containerd[1462]: 2025-05-08 05:21:49.665 [INFO][4941] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:49.753162 containerd[1462]: 2025-05-08 05:21:49.665 [INFO][4941] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" iface="eth0" netns="" May 8 05:21:49.753162 containerd[1462]: 2025-05-08 05:21:49.665 [INFO][4941] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:49.753162 containerd[1462]: 2025-05-08 05:21:49.665 [INFO][4941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:49.753162 containerd[1462]: 2025-05-08 05:21:49.725 [INFO][4948] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" HandleID="k8s-pod-network.62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:49.753162 containerd[1462]: 2025-05-08 05:21:49.725 [INFO][4948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:49.753162 containerd[1462]: 2025-05-08 05:21:49.725 [INFO][4948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:49.753162 containerd[1462]: 2025-05-08 05:21:49.740 [WARNING][4948] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" HandleID="k8s-pod-network.62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:49.753162 containerd[1462]: 2025-05-08 05:21:49.740 [INFO][4948] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" HandleID="k8s-pod-network.62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--l5rx9-eth0" May 8 05:21:49.753162 containerd[1462]: 2025-05-08 05:21:49.746 [INFO][4948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:49.753162 containerd[1462]: 2025-05-08 05:21:49.749 [INFO][4941] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e" May 8 05:21:49.755255 containerd[1462]: time="2025-05-08T05:21:49.753416385Z" level=info msg="TearDown network for sandbox \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\" successfully" May 8 05:21:49.765299 containerd[1462]: time="2025-05-08T05:21:49.764790175Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:21:49.765299 containerd[1462]: time="2025-05-08T05:21:49.764880254Z" level=info msg="RemovePodSandbox \"62c7943da045269d6206637c199308e5839f01bbe5b346e14bcfb8abb8f8036e\" returns successfully" May 8 05:21:49.765671 containerd[1462]: time="2025-05-08T05:21:49.765650369Z" level=info msg="StopPodSandbox for \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\"" May 8 05:21:49.769186 containerd[1462]: time="2025-05-08T05:21:49.769107251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:49.770417 containerd[1462]: time="2025-05-08T05:21:49.770265613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 8 05:21:49.774490 containerd[1462]: time="2025-05-08T05:21:49.773528613Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:49.779442 containerd[1462]: time="2025-05-08T05:21:49.779046732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:49.780904 containerd[1462]: time="2025-05-08T05:21:49.780854893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 3.064470883s" May 8 05:21:49.781077 containerd[1462]: time="2025-05-08T05:21:49.781022287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference 
\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 8 05:21:49.785334 containerd[1462]: time="2025-05-08T05:21:49.784053522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 8 05:21:49.794012 containerd[1462]: time="2025-05-08T05:21:49.791967223Z" level=info msg="CreateContainer within sandbox \"a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 8 05:21:49.819414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3945455894.mount: Deactivated successfully. May 8 05:21:49.831241 containerd[1462]: time="2025-05-08T05:21:49.831067542Z" level=info msg="CreateContainer within sandbox \"a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fd0754ced8428009b86f029a94edc8c69692fa1f76a9455f8d37ec3f9bcf0c90\"" May 8 05:21:49.832610 containerd[1462]: time="2025-05-08T05:21:49.832039596Z" level=info msg="StartContainer for \"fd0754ced8428009b86f029a94edc8c69692fa1f76a9455f8d37ec3f9bcf0c90\"" May 8 05:21:49.905129 systemd[1]: Started cri-containerd-fd0754ced8428009b86f029a94edc8c69692fa1f76a9455f8d37ec3f9bcf0c90.scope - libcontainer container fd0754ced8428009b86f029a94edc8c69692fa1f76a9455f8d37ec3f9bcf0c90. May 8 05:21:49.949962 containerd[1462]: 2025-05-08 05:21:49.872 [WARNING][4966] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"752f8b41-8238-4fc9-90e8-01e9c2d7826e", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1", Pod:"csi-node-driver-lgt7d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali421fadbb921", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:49.949962 containerd[1462]: 2025-05-08 05:21:49.874 [INFO][4966] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:49.949962 containerd[1462]: 2025-05-08 05:21:49.874 [INFO][4966] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" iface="eth0" netns="" May 8 05:21:49.949962 containerd[1462]: 2025-05-08 05:21:49.874 [INFO][4966] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:49.949962 containerd[1462]: 2025-05-08 05:21:49.874 [INFO][4966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:49.949962 containerd[1462]: 2025-05-08 05:21:49.932 [INFO][4989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" HandleID="k8s-pod-network.7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:49.949962 containerd[1462]: 2025-05-08 05:21:49.932 [INFO][4989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:49.949962 containerd[1462]: 2025-05-08 05:21:49.932 [INFO][4989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:49.949962 containerd[1462]: 2025-05-08 05:21:49.943 [WARNING][4989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" HandleID="k8s-pod-network.7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:49.949962 containerd[1462]: 2025-05-08 05:21:49.943 [INFO][4989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" HandleID="k8s-pod-network.7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:49.949962 containerd[1462]: 2025-05-08 05:21:49.945 [INFO][4989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:49.949962 containerd[1462]: 2025-05-08 05:21:49.947 [INFO][4966] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:49.949962 containerd[1462]: time="2025-05-08T05:21:49.949769628Z" level=info msg="TearDown network for sandbox \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\" successfully" May 8 05:21:49.949962 containerd[1462]: time="2025-05-08T05:21:49.949815955Z" level=info msg="StopPodSandbox for \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\" returns successfully" May 8 05:21:49.950864 containerd[1462]: time="2025-05-08T05:21:49.950606388Z" level=info msg="RemovePodSandbox for \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\"" May 8 05:21:49.950864 containerd[1462]: time="2025-05-08T05:21:49.950656201Z" level=info msg="Forcibly stopping sandbox \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\"" May 8 05:21:49.992491 containerd[1462]: time="2025-05-08T05:21:49.989309612Z" level=info msg="StartContainer for \"fd0754ced8428009b86f029a94edc8c69692fa1f76a9455f8d37ec3f9bcf0c90\" returns successfully" May 8 05:21:50.079857 containerd[1462]: 2025-05-08 05:21:50.023 [WARNING][5017] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"752f8b41-8238-4fc9-90e8-01e9c2d7826e", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1", Pod:"csi-node-driver-lgt7d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali421fadbb921", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:50.079857 containerd[1462]: 2025-05-08 05:21:50.024 [INFO][5017] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:50.079857 containerd[1462]: 2025-05-08 05:21:50.024 [INFO][5017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" iface="eth0" netns="" May 8 05:21:50.079857 containerd[1462]: 2025-05-08 05:21:50.024 [INFO][5017] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:50.079857 containerd[1462]: 2025-05-08 05:21:50.024 [INFO][5017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:50.079857 containerd[1462]: 2025-05-08 05:21:50.061 [INFO][5033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" HandleID="k8s-pod-network.7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:50.079857 containerd[1462]: 2025-05-08 05:21:50.062 [INFO][5033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:50.079857 containerd[1462]: 2025-05-08 05:21:50.062 [INFO][5033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:50.079857 containerd[1462]: 2025-05-08 05:21:50.071 [WARNING][5033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" HandleID="k8s-pod-network.7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:50.079857 containerd[1462]: 2025-05-08 05:21:50.071 [INFO][5033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" HandleID="k8s-pod-network.7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-csi--node--driver--lgt7d-eth0" May 8 05:21:50.079857 containerd[1462]: 2025-05-08 05:21:50.076 [INFO][5033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:50.079857 containerd[1462]: 2025-05-08 05:21:50.077 [INFO][5017] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb" May 8 05:21:50.079857 containerd[1462]: time="2025-05-08T05:21:50.079110106Z" level=info msg="TearDown network for sandbox \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\" successfully" May 8 05:21:50.085249 containerd[1462]: time="2025-05-08T05:21:50.085067869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:21:50.085249 containerd[1462]: time="2025-05-08T05:21:50.085133282Z" level=info msg="RemovePodSandbox \"7ab7ee224fe679d7cca517b630cb3a2545a234979e82c558fd2b7c18dd966fdb\" returns successfully" May 8 05:21:50.085863 containerd[1462]: time="2025-05-08T05:21:50.085583145Z" level=info msg="StopPodSandbox for \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\"" May 8 05:21:50.179804 containerd[1462]: 2025-05-08 05:21:50.136 [WARNING][5052] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4957b03e-c51d-4c66-82a2-0815615ec4ed", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184", Pod:"coredns-7db6d8ff4d-5mj5k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab5c6df7d0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:50.179804 containerd[1462]: 2025-05-08 05:21:50.137 [INFO][5052] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:50.179804 containerd[1462]: 2025-05-08 05:21:50.137 [INFO][5052] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" iface="eth0" netns="" May 8 05:21:50.179804 containerd[1462]: 2025-05-08 05:21:50.137 [INFO][5052] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:50.179804 containerd[1462]: 2025-05-08 05:21:50.137 [INFO][5052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:50.179804 containerd[1462]: 2025-05-08 05:21:50.163 [INFO][5059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" HandleID="k8s-pod-network.fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:50.179804 containerd[1462]: 2025-05-08 05:21:50.163 [INFO][5059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:50.179804 containerd[1462]: 2025-05-08 05:21:50.163 [INFO][5059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 05:21:50.179804 containerd[1462]: 2025-05-08 05:21:50.172 [WARNING][5059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" HandleID="k8s-pod-network.fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:50.179804 containerd[1462]: 2025-05-08 05:21:50.172 [INFO][5059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" HandleID="k8s-pod-network.fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:50.179804 containerd[1462]: 2025-05-08 05:21:50.176 [INFO][5059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:50.179804 containerd[1462]: 2025-05-08 05:21:50.177 [INFO][5052] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:50.180408 containerd[1462]: time="2025-05-08T05:21:50.179835308Z" level=info msg="TearDown network for sandbox \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\" successfully" May 8 05:21:50.180408 containerd[1462]: time="2025-05-08T05:21:50.179862870Z" level=info msg="StopPodSandbox for \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\" returns successfully" May 8 05:21:50.181578 containerd[1462]: time="2025-05-08T05:21:50.181030881Z" level=info msg="RemovePodSandbox for \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\"" May 8 05:21:50.181578 containerd[1462]: time="2025-05-08T05:21:50.181081354Z" level=info msg="Forcibly stopping sandbox \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\"" May 8 05:21:50.273024 containerd[1462]: 2025-05-08 05:21:50.228 [WARNING][5077] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4957b03e-c51d-4c66-82a2-0815615ec4ed", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"35f816d82d878fbab28696a7fd92720201118bec5ddc1498d21495eec772a184", Pod:"coredns-7db6d8ff4d-5mj5k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab5c6df7d0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:50.273024 containerd[1462]: 2025-05-08 05:21:50.229 [INFO][5077] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:50.273024 containerd[1462]: 2025-05-08 05:21:50.229 [INFO][5077] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" iface="eth0" netns="" May 8 05:21:50.273024 containerd[1462]: 2025-05-08 05:21:50.229 [INFO][5077] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:50.273024 containerd[1462]: 2025-05-08 05:21:50.229 [INFO][5077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:50.273024 containerd[1462]: 2025-05-08 05:21:50.253 [INFO][5085] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" HandleID="k8s-pod-network.fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:50.273024 containerd[1462]: 2025-05-08 05:21:50.253 [INFO][5085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:50.273024 containerd[1462]: 2025-05-08 05:21:50.253 [INFO][5085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 05:21:50.273024 containerd[1462]: 2025-05-08 05:21:50.263 [WARNING][5085] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" HandleID="k8s-pod-network.fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:50.273024 containerd[1462]: 2025-05-08 05:21:50.263 [INFO][5085] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" HandleID="k8s-pod-network.fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--5mj5k-eth0" May 8 05:21:50.273024 containerd[1462]: 2025-05-08 05:21:50.268 [INFO][5085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:50.273024 containerd[1462]: 2025-05-08 05:21:50.269 [INFO][5077] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113" May 8 05:21:50.273024 containerd[1462]: time="2025-05-08T05:21:50.272384246Z" level=info msg="TearDown network for sandbox \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\" successfully" May 8 05:21:50.279538 containerd[1462]: time="2025-05-08T05:21:50.279474153Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:21:50.279657 containerd[1462]: time="2025-05-08T05:21:50.279589279Z" level=info msg="RemovePodSandbox \"fe16484754c766f3f1dbd97dd228d1762f4b56784a01821a1dd871cc163f4113\" returns successfully" May 8 05:21:50.280542 containerd[1462]: time="2025-05-08T05:21:50.280313056Z" level=info msg="StopPodSandbox for \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\"" May 8 05:21:50.371595 containerd[1462]: 2025-05-08 05:21:50.333 [WARNING][5104] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92de57fd-57ee-4cd6-9f52-21d9887e2fee", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca", Pod:"coredns-7db6d8ff4d-k8p4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12371a08912", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:50.371595 containerd[1462]: 2025-05-08 05:21:50.333 [INFO][5104] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:50.371595 containerd[1462]: 2025-05-08 05:21:50.333 [INFO][5104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" iface="eth0" netns="" May 8 05:21:50.371595 containerd[1462]: 2025-05-08 05:21:50.334 [INFO][5104] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:50.371595 containerd[1462]: 2025-05-08 05:21:50.334 [INFO][5104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:50.371595 containerd[1462]: 2025-05-08 05:21:50.355 [INFO][5111] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" HandleID="k8s-pod-network.8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:50.371595 containerd[1462]: 2025-05-08 05:21:50.355 [INFO][5111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:50.371595 containerd[1462]: 2025-05-08 05:21:50.356 [INFO][5111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 05:21:50.371595 containerd[1462]: 2025-05-08 05:21:50.363 [WARNING][5111] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" HandleID="k8s-pod-network.8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:50.371595 containerd[1462]: 2025-05-08 05:21:50.363 [INFO][5111] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" HandleID="k8s-pod-network.8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:50.371595 containerd[1462]: 2025-05-08 05:21:50.367 [INFO][5111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:50.371595 containerd[1462]: 2025-05-08 05:21:50.369 [INFO][5104] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:50.371595 containerd[1462]: time="2025-05-08T05:21:50.371162336Z" level=info msg="TearDown network for sandbox \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\" successfully" May 8 05:21:50.371595 containerd[1462]: time="2025-05-08T05:21:50.371294284Z" level=info msg="StopPodSandbox for \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\" returns successfully" May 8 05:21:50.374988 containerd[1462]: time="2025-05-08T05:21:50.372293127Z" level=info msg="RemovePodSandbox for \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\"" May 8 05:21:50.374988 containerd[1462]: time="2025-05-08T05:21:50.372456123Z" level=info msg="Forcibly stopping sandbox \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\"" May 8 05:21:50.466994 containerd[1462]: 2025-05-08 05:21:50.427 [WARNING][5129] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"92de57fd-57ee-4cd6-9f52-21d9887e2fee", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"474bb2dc8c75c56a75e30508da12777b51edf1b164034d73bc9073378c6a72ca", Pod:"coredns-7db6d8ff4d-k8p4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12371a08912", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:50.466994 containerd[1462]: 2025-05-08 05:21:50.427 [INFO][5129] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:50.466994 containerd[1462]: 2025-05-08 05:21:50.427 [INFO][5129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" iface="eth0" netns="" May 8 05:21:50.466994 containerd[1462]: 2025-05-08 05:21:50.427 [INFO][5129] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:50.466994 containerd[1462]: 2025-05-08 05:21:50.427 [INFO][5129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:50.466994 containerd[1462]: 2025-05-08 05:21:50.455 [INFO][5137] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" HandleID="k8s-pod-network.8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:50.466994 containerd[1462]: 2025-05-08 05:21:50.455 [INFO][5137] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:50.466994 containerd[1462]: 2025-05-08 05:21:50.455 [INFO][5137] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 8 05:21:50.466994 containerd[1462]: 2025-05-08 05:21:50.462 [WARNING][5137] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" HandleID="k8s-pod-network.8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:50.466994 containerd[1462]: 2025-05-08 05:21:50.462 [INFO][5137] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" HandleID="k8s-pod-network.8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-coredns--7db6d8ff4d--k8p4k-eth0" May 8 05:21:50.466994 containerd[1462]: 2025-05-08 05:21:50.463 [INFO][5137] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:50.466994 containerd[1462]: 2025-05-08 05:21:50.464 [INFO][5129] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272" May 8 05:21:50.466994 containerd[1462]: time="2025-05-08T05:21:50.466047136Z" level=info msg="TearDown network for sandbox \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\" successfully" May 8 05:21:50.472143 containerd[1462]: time="2025-05-08T05:21:50.472071012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:21:50.472510 containerd[1462]: time="2025-05-08T05:21:50.472489357Z" level=info msg="RemovePodSandbox \"8b5eb0fb77bf73733722377519c99ef77fd03c1144a39dd8150352fad930c272\" returns successfully" May 8 05:21:50.473747 containerd[1462]: time="2025-05-08T05:21:50.473532353Z" level=info msg="StopPodSandbox for \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\"" May 8 05:21:50.562885 containerd[1462]: 2025-05-08 05:21:50.517 [WARNING][5155] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0", GenerateName:"calico-apiserver-564bcb876c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e517a02-62e9-4463-be28-71db2b7410c2", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564bcb876c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412", Pod:"calico-apiserver-564bcb876c-d22nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali660955f9168", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:50.562885 containerd[1462]: 2025-05-08 05:21:50.517 [INFO][5155] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:50.562885 containerd[1462]: 2025-05-08 05:21:50.517 [INFO][5155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" iface="eth0" netns="" May 8 05:21:50.562885 containerd[1462]: 2025-05-08 05:21:50.517 [INFO][5155] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:50.562885 containerd[1462]: 2025-05-08 05:21:50.517 [INFO][5155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:50.562885 containerd[1462]: 2025-05-08 05:21:50.551 [INFO][5162] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" HandleID="k8s-pod-network.6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:50.562885 containerd[1462]: 2025-05-08 05:21:50.552 [INFO][5162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:50.562885 containerd[1462]: 2025-05-08 05:21:50.552 [INFO][5162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:50.562885 containerd[1462]: 2025-05-08 05:21:50.558 [WARNING][5162] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" HandleID="k8s-pod-network.6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:50.562885 containerd[1462]: 2025-05-08 05:21:50.558 [INFO][5162] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" HandleID="k8s-pod-network.6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:50.562885 containerd[1462]: 2025-05-08 05:21:50.560 [INFO][5162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:50.562885 containerd[1462]: 2025-05-08 05:21:50.561 [INFO][5155] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:50.563415 containerd[1462]: time="2025-05-08T05:21:50.562928639Z" level=info msg="TearDown network for sandbox \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\" successfully" May 8 05:21:50.563415 containerd[1462]: time="2025-05-08T05:21:50.562956050Z" level=info msg="StopPodSandbox for \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\" returns successfully" May 8 05:21:50.563472 containerd[1462]: time="2025-05-08T05:21:50.563412116Z" level=info msg="RemovePodSandbox for \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\"" May 8 05:21:50.563472 containerd[1462]: time="2025-05-08T05:21:50.563437884Z" level=info msg="Forcibly stopping sandbox \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\"" May 8 05:21:50.681934 containerd[1462]: 2025-05-08 05:21:50.621 [WARNING][5180] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0", GenerateName:"calico-apiserver-564bcb876c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e517a02-62e9-4463-be28-71db2b7410c2", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.May, 8, 5, 21, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"564bcb876c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-3-n-e0f469a76e.novalocal", ContainerID:"c4fe5e5383b3e47440ab0aaf7bb6ac7e5b5fabaa3a6d19e74e1681cf967f3412", Pod:"calico-apiserver-564bcb876c-d22nw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali660955f9168", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 8 05:21:50.681934 containerd[1462]: 2025-05-08 05:21:50.621 [INFO][5180] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:50.681934 containerd[1462]: 2025-05-08 05:21:50.621 [INFO][5180] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" iface="eth0" netns="" May 8 05:21:50.681934 containerd[1462]: 2025-05-08 05:21:50.621 [INFO][5180] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:50.681934 containerd[1462]: 2025-05-08 05:21:50.621 [INFO][5180] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:50.681934 containerd[1462]: 2025-05-08 05:21:50.669 [INFO][5187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" HandleID="k8s-pod-network.6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:50.681934 containerd[1462]: 2025-05-08 05:21:50.669 [INFO][5187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 8 05:21:50.681934 containerd[1462]: 2025-05-08 05:21:50.669 [INFO][5187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 8 05:21:50.681934 containerd[1462]: 2025-05-08 05:21:50.676 [WARNING][5187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" HandleID="k8s-pod-network.6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:50.681934 containerd[1462]: 2025-05-08 05:21:50.676 [INFO][5187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" HandleID="k8s-pod-network.6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" Workload="ci--4081--3--3--n--e0f469a76e.novalocal-k8s-calico--apiserver--564bcb876c--d22nw-eth0" May 8 05:21:50.681934 containerd[1462]: 2025-05-08 05:21:50.678 [INFO][5187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 8 05:21:50.681934 containerd[1462]: 2025-05-08 05:21:50.680 [INFO][5180] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280" May 8 05:21:50.681934 containerd[1462]: time="2025-05-08T05:21:50.681895132Z" level=info msg="TearDown network for sandbox \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\" successfully" May 8 05:21:50.686053 containerd[1462]: time="2025-05-08T05:21:50.685961747Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 8 05:21:50.686110 containerd[1462]: time="2025-05-08T05:21:50.686066203Z" level=info msg="RemovePodSandbox \"6f464e781526b94105c2dc67f800d4d93a02b1e065ca306c5675dc276b7f0280\" returns successfully" May 8 05:21:53.123134 containerd[1462]: time="2025-05-08T05:21:53.122332619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:53.124827 containerd[1462]: time="2025-05-08T05:21:53.124796733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 8 05:21:53.126236 containerd[1462]: time="2025-05-08T05:21:53.126188345Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:53.128958 containerd[1462]: time="2025-05-08T05:21:53.128925723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:53.129905 containerd[1462]: time="2025-05-08T05:21:53.129870726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.345783872s" May 8 05:21:53.129957 containerd[1462]: time="2025-05-08T05:21:53.129907255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 8 05:21:53.131715 containerd[1462]: time="2025-05-08T05:21:53.131591717Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 8 05:21:53.145423 containerd[1462]: time="2025-05-08T05:21:53.145380717Z" level=info msg="CreateContainer within sandbox \"94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 8 05:21:53.171431 containerd[1462]: time="2025-05-08T05:21:53.171304423Z" level=info msg="CreateContainer within sandbox \"94561963fb7d58e71aa7d1382db731460ea6ebbfececd32867a8e957bf1db3f4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e3d81f58dc6868a08c987f24b83b710d6bf73352cb31c4d352e59cf2c6b28d0a\"" May 8 05:21:53.172983 containerd[1462]: time="2025-05-08T05:21:53.172144189Z" level=info msg="StartContainer for \"e3d81f58dc6868a08c987f24b83b710d6bf73352cb31c4d352e59cf2c6b28d0a\"" May 8 05:21:53.205125 systemd[1]: Started cri-containerd-e3d81f58dc6868a08c987f24b83b710d6bf73352cb31c4d352e59cf2c6b28d0a.scope - libcontainer container e3d81f58dc6868a08c987f24b83b710d6bf73352cb31c4d352e59cf2c6b28d0a. May 8 05:21:53.248572 containerd[1462]: time="2025-05-08T05:21:53.248452892Z" level=info msg="StartContainer for \"e3d81f58dc6868a08c987f24b83b710d6bf73352cb31c4d352e59cf2c6b28d0a\" returns successfully" May 8 05:21:53.444925 kubelet[2669]: I0508 05:21:53.444861 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59b8776755-76bww" podStartSLOduration=33.888832916 podStartE2EDuration="43.444842002s" podCreationTimestamp="2025-05-08 05:21:10 +0000 UTC" firstStartedPulling="2025-05-08 05:21:43.574850076 +0000 UTC m=+54.777880693" lastFinishedPulling="2025-05-08 05:21:53.130859162 +0000 UTC m=+64.333889779" observedRunningTime="2025-05-08 05:21:53.404491108 +0000 UTC m=+64.607521735" watchObservedRunningTime="2025-05-08 05:21:53.444842002 +0000 UTC m=+64.647872619" May 8 05:21:55.756964 containerd[1462]: time="2025-05-08T05:21:55.756901392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:55.758480 containerd[1462]: time="2025-05-08T05:21:55.758350007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 8 05:21:55.760397 containerd[1462]: time="2025-05-08T05:21:55.759988971Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:55.762871 containerd[1462]: time="2025-05-08T05:21:55.762840657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 05:21:55.763543 containerd[1462]: time="2025-05-08T05:21:55.763501855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.631865374s" May 8 05:21:55.763596 containerd[1462]: time="2025-05-08T05:21:55.763535899Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 8 05:21:55.767641 containerd[1462]: time="2025-05-08T05:21:55.767608710Z" level=info msg="CreateContainer within sandbox \"a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 8 05:21:55.798106 containerd[1462]: time="2025-05-08T05:21:55.798064818Z" level=info msg="CreateContainer within sandbox \"a9954fc59aee3980ad82b11784efa20076d2ce7d8c8e410190253e59fe13ada1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6d762f5292666cb117e5b03ba7c4d3233eb3582f50265188a2e63594a5b470f0\"" May 8 05:21:55.798733 containerd[1462]: time="2025-05-08T05:21:55.798706168Z" level=info msg="StartContainer for \"6d762f5292666cb117e5b03ba7c4d3233eb3582f50265188a2e63594a5b470f0\"" May 8 05:21:55.799896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567877152.mount: Deactivated successfully. May 8 05:21:55.853161 systemd[1]: Started cri-containerd-6d762f5292666cb117e5b03ba7c4d3233eb3582f50265188a2e63594a5b470f0.scope - libcontainer container 6d762f5292666cb117e5b03ba7c4d3233eb3582f50265188a2e63594a5b470f0. May 8 05:21:55.896678 containerd[1462]: time="2025-05-08T05:21:55.896105179Z" level=info msg="StartContainer for \"6d762f5292666cb117e5b03ba7c4d3233eb3582f50265188a2e63594a5b470f0\" returns successfully" May 8 05:21:56.057027 kubelet[2669]: I0508 05:21:56.056170 2669 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 8 05:21:56.057027 kubelet[2669]: I0508 05:21:56.056209 2669 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 8 05:21:56.801858 systemd[1]: run-containerd-runc-k8s.io-e3d81f58dc6868a08c987f24b83b710d6bf73352cb31c4d352e59cf2c6b28d0a-runc.qYFz6J.mount: Deactivated successfully. 
May 8 05:22:02.714778 kernel: hrtimer: interrupt took 2475148 ns May 8 05:22:03.577842 kubelet[2669]: I0508 05:22:03.577621 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 05:22:03.645400 kubelet[2669]: I0508 05:22:03.645236 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lgt7d" podStartSLOduration=41.364867663 podStartE2EDuration="53.64521705s" podCreationTimestamp="2025-05-08 05:21:10 +0000 UTC" firstStartedPulling="2025-05-08 05:21:43.484773158 +0000 UTC m=+54.687803785" lastFinishedPulling="2025-05-08 05:21:55.765122555 +0000 UTC m=+66.968153172" observedRunningTime="2025-05-08 05:21:56.402774632 +0000 UTC m=+67.605805269" watchObservedRunningTime="2025-05-08 05:22:03.64521705 +0000 UTC m=+74.848247667" May 8 05:22:06.643119 kubelet[2669]: I0508 05:22:06.642360 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 05:22:31.258118 update_engine[1446]: I20250508 05:22:31.256778 1446 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 8 05:22:31.258118 update_engine[1446]: I20250508 05:22:31.257391 1446 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 8 05:22:31.265135 update_engine[1446]: I20250508 05:22:31.259440 1446 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 8 05:22:31.265135 update_engine[1446]: I20250508 05:22:31.264091 1446 omaha_request_params.cc:62] Current group set to lts May 8 05:22:31.275073 update_engine[1446]: I20250508 05:22:31.274281 1446 update_attempter.cc:499] Already updated boot flags. Skipping. May 8 05:22:31.275073 update_engine[1446]: I20250508 05:22:31.274360 1446 update_attempter.cc:643] Scheduling an action processor start. May 8 05:22:31.275073 update_engine[1446]: I20250508 05:22:31.274444 1446 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 8 05:22:31.275073 update_engine[1446]: I20250508 05:22:31.274701 1446 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 8 05:22:31.275073 update_engine[1446]: I20250508 05:22:31.274916 1446 omaha_request_action.cc:271] Posting an Omaha request to disabled May 8 05:22:31.280078 update_engine[1446]: I20250508 05:22:31.274959 1446 omaha_request_action.cc:272] Request: May 8 05:22:31.280078 update_engine[1446]: May 8 05:22:31.280078 update_engine[1446]: May 8 05:22:31.280078 update_engine[1446]: May 8 05:22:31.280078 update_engine[1446]: May 8 05:22:31.280078 update_engine[1446]: May 8 05:22:31.280078 update_engine[1446]: May 8 05:22:31.280078 update_engine[1446]: May 8 05:22:31.280078 update_engine[1446]: May 8 05:22:31.280078 update_engine[1446]: I20250508 05:22:31.277096 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 8 05:22:31.288908 locksmithd[1479]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 8 05:22:31.292026 update_engine[1446]: I20250508 05:22:31.291905 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 8 05:22:31.293546 update_engine[1446]: I20250508 05:22:31.293423 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
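The pod_startup_latency_tracker line above for csi-node-driver-lgt7d reports two durations that are mutually consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window. A quick check of that arithmetic with the timestamps copied from the log; kubelet does this on the monotonic (m=+) readings, so the wall-clock version below drifts in the final digits:

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-05-08 05:21:10 +0000 UTC")
        firstPull := mustParse("2025-05-08 05:21:43.484773158 +0000 UTC")
        lastPull := mustParse("2025-05-08 05:21:55.765122555 +0000 UTC")
        running := mustParse("2025-05-08 05:22:03.64521705 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration ~ 53.645s
        slo := e2e - lastPull.Sub(firstPull) // image-pull window excluded ~ 41.365s
        fmt.Println("E2E:", e2e, "SLO:", slo)
    }
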
May 8 05:22:31.307026 update_engine[1446]: E20250508 05:22:31.306870 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 8 05:22:31.307191 update_engine[1446]: I20250508 05:22:31.307127 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 8 05:22:41.227762 update_engine[1446]: I20250508 05:22:41.227526 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 8 05:22:41.229093 update_engine[1446]: I20250508 05:22:41.228294 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 8 05:22:41.229093 update_engine[1446]: I20250508 05:22:41.228924 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 8 05:22:41.240214 update_engine[1446]: E20250508 05:22:41.240103 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 8 05:22:41.240448 update_engine[1446]: I20250508 05:22:41.240227 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 8 05:22:51.229638 update_engine[1446]: I20250508 05:22:51.227843 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 8 05:22:51.229638 update_engine[1446]: I20250508 05:22:51.229559 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 8 05:22:51.232908 update_engine[1446]: I20250508 05:22:51.230700 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 8 05:22:51.241523 update_engine[1446]: E20250508 05:22:51.241412 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 8 05:22:51.241797 update_engine[1446]: I20250508 05:22:51.241585 1446 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 8 05:23:01.228939 update_engine[1446]: I20250508 05:23:01.228765 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 8 05:23:01.230253 update_engine[1446]: I20250508 05:23:01.229316 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 8 05:23:01.230253 update_engine[1446]: I20250508 05:23:01.229831 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 8 05:23:01.240217 update_engine[1446]: E20250508 05:23:01.240056 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 8 05:23:01.240217 update_engine[1446]: I20250508 05:23:01.240200 1446 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 8 05:23:01.240652 update_engine[1446]: I20250508 05:23:01.240258 1446 omaha_request_action.cc:617] Omaha request response:
May 8 05:23:01.240746 update_engine[1446]: E20250508 05:23:01.240706 1446 omaha_request_action.cc:636] Omaha request network transfer failed.
May 8 05:23:01.241395 update_engine[1446]: I20250508 05:23:01.241297 1446 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 8 05:23:01.241395 update_engine[1446]: I20250508 05:23:01.241343 1446 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 8 05:23:01.241395 update_engine[1446]: I20250508 05:23:01.241357 1446 update_attempter.cc:306] Processing Done.
May 8 05:23:01.241767 update_engine[1446]: E20250508 05:23:01.241467 1446 update_attempter.cc:619] Update failed.
May 8 05:23:01.241767 update_engine[1446]: I20250508 05:23:01.241499 1446 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 8 05:23:01.241767 update_engine[1446]: I20250508 05:23:01.241513 1446 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 8 05:23:01.241767 update_engine[1446]: I20250508 05:23:01.241528 1446 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 8 05:23:01.242676 update_engine[1446]: I20250508 05:23:01.241963 1446 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 8 05:23:01.242676 update_engine[1446]: I20250508 05:23:01.242159 1446 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 8 05:23:01.242676 update_engine[1446]: I20250508 05:23:01.242180 1446 omaha_request_action.cc:272] Request:
May 8 05:23:01.242676 update_engine[1446]:
May 8 05:23:01.242676 update_engine[1446]:
May 8 05:23:01.242676 update_engine[1446]:
May 8 05:23:01.242676 update_engine[1446]:
May 8 05:23:01.242676 update_engine[1446]:
May 8 05:23:01.242676 update_engine[1446]:
May 8 05:23:01.242676 update_engine[1446]: I20250508 05:23:01.242194 1446 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 8 05:23:01.242676 update_engine[1446]: I20250508 05:23:01.242477 1446 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 8 05:23:01.244177 update_engine[1446]: I20250508 05:23:01.242894 1446 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 8 05:23:01.245799 locksmithd[1479]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 8 05:23:01.253252 update_engine[1446]: E20250508 05:23:01.253147 1446 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 8 05:23:01.253471 update_engine[1446]: I20250508 05:23:01.253266 1446 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 8 05:23:01.253471 update_engine[1446]: I20250508 05:23:01.253290 1446 omaha_request_action.cc:617] Omaha request response:
May 8 05:23:01.253471 update_engine[1446]: I20250508 05:23:01.253305 1446 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 8 05:23:01.253471 update_engine[1446]: I20250508 05:23:01.253319 1446 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 8 05:23:01.253471 update_engine[1446]: I20250508 05:23:01.253330 1446 update_attempter.cc:306] Processing Done.
May 8 05:23:01.253471 update_engine[1446]: I20250508 05:23:01.253345 1446 update_attempter.cc:310] Error event sent.
May 8 05:23:01.253471 update_engine[1446]: I20250508 05:23:01.253382 1446 update_check_scheduler.cc:74] Next update check in 43m6s
May 8 05:23:01.255031 locksmithd[1479]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 8 05:23:56.835440 systemd[1]: run-containerd-runc-k8s.io-e3d81f58dc6868a08c987f24b83b710d6bf73352cb31c4d352e59cf2c6b28d0a-runc.Pni9Zt.mount: Deactivated successfully.
May 8 05:23:57.128757 systemd[1]: Started sshd@9-172.24.4.234:22-172.24.4.1:39012.service - OpenSSH per-connection server daemon (172.24.4.1:39012).
May 8 05:23:58.528144 sshd[5574]: Accepted publickey for core from 172.24.4.1 port 39012 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:23:58.533879 sshd[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:23:58.554375 systemd-logind[1443]: New session 12 of user core.
May 8 05:23:58.562444 systemd[1]: Started session-12.scope - Session 12 of User core.
May 8 05:23:59.310908 sshd[5574]: pam_unix(sshd:session): session closed for user core
May 8 05:23:59.322140 systemd[1]: sshd@9-172.24.4.234:22-172.24.4.1:39012.service: Deactivated successfully.
May 8 05:23:59.329685 systemd[1]: session-12.scope: Deactivated successfully.
May 8 05:23:59.331926 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit.
May 8 05:23:59.335181 systemd-logind[1443]: Removed session 12.
May 8 05:24:04.337506 systemd[1]: Started sshd@10-172.24.4.234:22-172.24.4.1:46928.service - OpenSSH per-connection server daemon (172.24.4.1:46928).
May 8 05:24:05.593404 sshd[5607]: Accepted publickey for core from 172.24.4.1 port 46928 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:24:05.597705 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:24:05.610195 systemd-logind[1443]: New session 13 of user core.
May 8 05:24:05.618391 systemd[1]: Started session-13.scope - Session 13 of User core.
May 8 05:24:06.355836 sshd[5607]: pam_unix(sshd:session): session closed for user core
May 8 05:24:06.362263 systemd[1]: sshd@10-172.24.4.234:22-172.24.4.1:46928.service: Deactivated successfully.
May 8 05:24:06.365772 systemd[1]: session-13.scope: Deactivated successfully.
May 8 05:24:06.369807 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit.
May 8 05:24:06.371193 systemd-logind[1443]: Removed session 13.
May 8 05:24:11.384536 systemd[1]: Started sshd@11-172.24.4.234:22-172.24.4.1:46940.service - OpenSSH per-connection server daemon (172.24.4.1:46940).
May 8 05:24:12.711059 sshd[5623]: Accepted publickey for core from 172.24.4.1 port 46940 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:24:12.713795 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:24:12.725642 systemd-logind[1443]: New session 14 of user core.
May 8 05:24:12.734316 systemd[1]: Started session-14.scope - Session 14 of User core.
May 8 05:24:13.325164 sshd[5623]: pam_unix(sshd:session): session closed for user core
May 8 05:24:13.332767 systemd[1]: sshd@11-172.24.4.234:22-172.24.4.1:46940.service: Deactivated successfully.
May 8 05:24:13.341713 systemd[1]: session-14.scope: Deactivated successfully.
May 8 05:24:13.347384 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit.
May 8 05:24:13.350582 systemd-logind[1443]: Removed session 14.
May 8 05:24:18.359709 systemd[1]: Started sshd@12-172.24.4.234:22-172.24.4.1:36832.service - OpenSSH per-connection server daemon (172.24.4.1:36832).
May 8 05:24:19.524889 sshd[5660]: Accepted publickey for core from 172.24.4.1 port 36832 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:24:19.529319 sshd[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:24:19.542773 systemd-logind[1443]: New session 15 of user core.
May 8 05:24:19.555324 systemd[1]: Started session-15.scope - Session 15 of User core.
May 8 05:24:20.408252 sshd[5660]: pam_unix(sshd:session): session closed for user core
May 8 05:24:20.425800 systemd[1]: sshd@12-172.24.4.234:22-172.24.4.1:36832.service: Deactivated successfully.
May 8 05:24:20.432762 systemd[1]: session-15.scope: Deactivated successfully.
May 8 05:24:20.439627 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit.
May 8 05:24:20.449817 systemd[1]: Started sshd@13-172.24.4.234:22-172.24.4.1:36846.service - OpenSSH per-connection server daemon (172.24.4.1:36846).
May 8 05:24:20.455709 systemd-logind[1443]: Removed session 15.
May 8 05:24:21.723062 sshd[5680]: Accepted publickey for core from 172.24.4.1 port 36846 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:24:21.725965 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:24:21.739129 systemd-logind[1443]: New session 16 of user core.
May 8 05:24:21.751379 systemd[1]: Started session-16.scope - Session 16 of User core.
May 8 05:24:22.339055 sshd[5680]: pam_unix(sshd:session): session closed for user core
May 8 05:24:22.348115 systemd[1]: sshd@13-172.24.4.234:22-172.24.4.1:36846.service: Deactivated successfully.
May 8 05:24:22.350636 systemd[1]: session-16.scope: Deactivated successfully.
May 8 05:24:22.352855 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit.
May 8 05:24:22.360391 systemd[1]: Started sshd@14-172.24.4.234:22-172.24.4.1:36852.service - OpenSSH per-connection server daemon (172.24.4.1:36852).
May 8 05:24:22.362698 systemd-logind[1443]: Removed session 16.
May 8 05:24:23.647010 sshd[5691]: Accepted publickey for core from 172.24.4.1 port 36852 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:24:23.648602 sshd[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:24:23.654871 systemd-logind[1443]: New session 17 of user core.
May 8 05:24:23.658113 systemd[1]: Started session-17.scope - Session 17 of User core.
May 8 05:24:24.382218 sshd[5691]: pam_unix(sshd:session): session closed for user core
May 8 05:24:24.390378 systemd[1]: sshd@14-172.24.4.234:22-172.24.4.1:36852.service: Deactivated successfully.
May 8 05:24:24.393365 systemd[1]: session-17.scope: Deactivated successfully.
May 8 05:24:24.395146 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit.
May 8 05:24:24.396847 systemd-logind[1443]: Removed session 17.
May 8 05:24:29.420699 systemd[1]: Started sshd@15-172.24.4.234:22-172.24.4.1:35290.service - OpenSSH per-connection server daemon (172.24.4.1:35290).
May 8 05:24:30.957763 sshd[5728]: Accepted publickey for core from 172.24.4.1 port 35290 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:24:30.963836 sshd[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:24:30.984562 systemd-logind[1443]: New session 18 of user core.
May 8 05:24:30.998409 systemd[1]: Started session-18.scope - Session 18 of User core.
May 8 05:24:31.575853 sshd[5728]: pam_unix(sshd:session): session closed for user core
May 8 05:24:31.581869 systemd[1]: sshd@15-172.24.4.234:22-172.24.4.1:35290.service: Deactivated successfully.
May 8 05:24:31.590201 systemd[1]: session-18.scope: Deactivated successfully.
May 8 05:24:31.594086 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit.
May 8 05:24:31.596968 systemd-logind[1443]: Removed session 18.
May 8 05:24:36.601654 systemd[1]: Started sshd@16-172.24.4.234:22-172.24.4.1:41322.service - OpenSSH per-connection server daemon (172.24.4.1:41322).
May 8 05:24:37.923015 sshd[5743]: Accepted publickey for core from 172.24.4.1 port 41322 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:24:37.927455 sshd[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:24:37.943185 systemd-logind[1443]: New session 19 of user core.
May 8 05:24:37.952339 systemd[1]: Started session-19.scope - Session 19 of User core.
May 8 05:24:38.705527 sshd[5743]: pam_unix(sshd:session): session closed for user core
May 8 05:24:38.711884 systemd[1]: sshd@16-172.24.4.234:22-172.24.4.1:41322.service: Deactivated successfully.
May 8 05:24:38.715707 systemd[1]: session-19.scope: Deactivated successfully.
May 8 05:24:38.717551 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit.
May 8 05:24:38.719580 systemd-logind[1443]: Removed session 19.
May 8 05:24:43.734320 systemd[1]: Started sshd@17-172.24.4.234:22-172.24.4.1:51548.service - OpenSSH per-connection server daemon (172.24.4.1:51548).
May 8 05:24:44.888026 sshd[5756]: Accepted publickey for core from 172.24.4.1 port 51548 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:24:44.894646 sshd[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:24:44.909368 systemd-logind[1443]: New session 20 of user core.
May 8 05:24:44.919339 systemd[1]: Started session-20.scope - Session 20 of User core.
May 8 05:24:45.630752 sshd[5756]: pam_unix(sshd:session): session closed for user core
May 8 05:24:45.647155 systemd[1]: sshd@17-172.24.4.234:22-172.24.4.1:51548.service: Deactivated successfully.
May 8 05:24:45.654102 systemd[1]: session-20.scope: Deactivated successfully.
May 8 05:24:45.658732 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit.
May 8 05:24:45.669738 systemd[1]: Started sshd@18-172.24.4.234:22-172.24.4.1:51552.service - OpenSSH per-connection server daemon (172.24.4.1:51552).
May 8 05:24:45.674118 systemd-logind[1443]: Removed session 20.
May 8 05:24:46.889278 sshd[5804]: Accepted publickey for core from 172.24.4.1 port 51552 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:24:46.893552 sshd[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:24:46.907132 systemd-logind[1443]: New session 21 of user core.
May 8 05:24:46.918319 systemd[1]: Started session-21.scope - Session 21 of User core.
May 8 05:24:48.069455 sshd[5804]: pam_unix(sshd:session): session closed for user core
May 8 05:24:48.083495 systemd[1]: sshd@18-172.24.4.234:22-172.24.4.1:51552.service: Deactivated successfully.
May 8 05:24:48.091438 systemd[1]: session-21.scope: Deactivated successfully.
May 8 05:24:48.094335 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit.
May 8 05:24:48.105708 systemd[1]: Started sshd@19-172.24.4.234:22-172.24.4.1:51566.service - OpenSSH per-connection server daemon (172.24.4.1:51566).
May 8 05:24:48.110434 systemd-logind[1443]: Removed session 21.
May 8 05:24:49.423958 sshd[5823]: Accepted publickey for core from 172.24.4.1 port 51566 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:24:49.428397 sshd[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:24:49.446149 systemd-logind[1443]: New session 22 of user core.
May 8 05:24:49.458401 systemd[1]: Started session-22.scope - Session 22 of User core.
May 8 05:24:53.071128 sshd[5823]: pam_unix(sshd:session): session closed for user core
May 8 05:24:53.095668 systemd[1]: Started sshd@20-172.24.4.234:22-172.24.4.1:51572.service - OpenSSH per-connection server daemon (172.24.4.1:51572).
May 8 05:24:53.096760 systemd[1]: sshd@19-172.24.4.234:22-172.24.4.1:51566.service: Deactivated successfully.
May 8 05:24:53.102548 systemd[1]: session-22.scope: Deactivated successfully.
May 8 05:24:53.106253 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit.
May 8 05:24:53.109907 systemd-logind[1443]: Removed session 22.
May 8 05:24:54.316075 sshd[5841]: Accepted publickey for core from 172.24.4.1 port 51572 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:24:54.321196 sshd[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:24:54.338808 systemd-logind[1443]: New session 23 of user core.
May 8 05:24:54.348860 systemd[1]: Started session-23.scope - Session 23 of User core.
May 8 05:24:55.411531 sshd[5841]: pam_unix(sshd:session): session closed for user core
May 8 05:24:55.425608 systemd[1]: sshd@20-172.24.4.234:22-172.24.4.1:51572.service: Deactivated successfully.
May 8 05:24:55.432890 systemd[1]: session-23.scope: Deactivated successfully.
May 8 05:24:55.449215 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit.
May 8 05:24:55.455692 systemd[1]: Started sshd@21-172.24.4.234:22-172.24.4.1:53080.service - OpenSSH per-connection server daemon (172.24.4.1:53080).
May 8 05:24:55.460616 systemd-logind[1443]: Removed session 23.
May 8 05:24:56.627298 sshd[5854]: Accepted publickey for core from 172.24.4.1 port 53080 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:24:56.630345 sshd[5854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:24:56.642116 systemd-logind[1443]: New session 24 of user core.
May 8 05:24:56.655307 systemd[1]: Started session-24.scope - Session 24 of User core.
May 8 05:24:57.317825 sshd[5854]: pam_unix(sshd:session): session closed for user core
May 8 05:24:57.327941 systemd[1]: sshd@21-172.24.4.234:22-172.24.4.1:53080.service: Deactivated successfully.
May 8 05:24:57.335272 systemd[1]: session-24.scope: Deactivated successfully.
May 8 05:24:57.337871 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit.
May 8 05:24:57.347901 systemd-logind[1443]: Removed session 24.
May 8 05:25:02.341753 systemd[1]: Started sshd@22-172.24.4.234:22-172.24.4.1:53092.service - OpenSSH per-connection server daemon (172.24.4.1:53092).
May 8 05:25:03.634036 sshd[5909]: Accepted publickey for core from 172.24.4.1 port 53092 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:25:03.637242 sshd[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:25:03.652475 systemd-logind[1443]: New session 25 of user core.
May 8 05:25:03.660297 systemd[1]: Started session-25.scope - Session 25 of User core.
May 8 05:25:04.456587 sshd[5909]: pam_unix(sshd:session): session closed for user core
May 8 05:25:04.470431 systemd[1]: sshd@22-172.24.4.234:22-172.24.4.1:53092.service: Deactivated successfully.
May 8 05:25:04.478700 systemd[1]: session-25.scope: Deactivated successfully.
May 8 05:25:04.482101 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit.
May 8 05:25:04.485327 systemd-logind[1443]: Removed session 25.
May 8 05:25:09.484946 systemd[1]: Started sshd@23-172.24.4.234:22-172.24.4.1:52252.service - OpenSSH per-connection server daemon (172.24.4.1:52252).
May 8 05:25:10.688573 sshd[5924]: Accepted publickey for core from 172.24.4.1 port 52252 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:25:10.693386 sshd[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:25:10.706321 systemd-logind[1443]: New session 26 of user core.
May 8 05:25:10.716692 systemd[1]: Started session-26.scope - Session 26 of User core.
May 8 05:25:11.432791 sshd[5924]: pam_unix(sshd:session): session closed for user core
May 8 05:25:11.444790 systemd[1]: sshd@23-172.24.4.234:22-172.24.4.1:52252.service: Deactivated successfully.
May 8 05:25:11.454418 systemd[1]: session-26.scope: Deactivated successfully.
May 8 05:25:11.491562 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit.
May 8 05:25:11.493656 systemd-logind[1443]: Removed session 26.
May 8 05:25:16.465793 systemd[1]: Started sshd@24-172.24.4.234:22-172.24.4.1:53382.service - OpenSSH per-connection server daemon (172.24.4.1:53382).
May 8 05:25:17.589717 sshd[5959]: Accepted publickey for core from 172.24.4.1 port 53382 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:25:17.597053 sshd[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:25:17.615162 systemd-logind[1443]: New session 27 of user core.
May 8 05:25:17.625481 systemd[1]: Started session-27.scope - Session 27 of User core.
May 8 05:25:18.511283 sshd[5959]: pam_unix(sshd:session): session closed for user core
May 8 05:25:18.519704 systemd[1]: sshd@24-172.24.4.234:22-172.24.4.1:53382.service: Deactivated successfully.
May 8 05:25:18.526638 systemd[1]: session-27.scope: Deactivated successfully.
May 8 05:25:18.541662 systemd-logind[1443]: Session 27 logged out. Waiting for processes to exit.
May 8 05:25:18.544882 systemd-logind[1443]: Removed session 27.
May 8 05:25:23.539042 systemd[1]: Started sshd@25-172.24.4.234:22-172.24.4.1:44088.service - OpenSSH per-connection server daemon (172.24.4.1:44088).
May 8 05:25:24.969434 sshd[5973]: Accepted publickey for core from 172.24.4.1 port 44088 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:25:24.973792 sshd[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:25:24.988123 systemd-logind[1443]: New session 28 of user core.
May 8 05:25:25.000435 systemd[1]: Started session-28.scope - Session 28 of User core.
May 8 05:25:25.708824 sshd[5973]: pam_unix(sshd:session): session closed for user core
May 8 05:25:25.713957 systemd[1]: sshd@25-172.24.4.234:22-172.24.4.1:44088.service: Deactivated successfully.
May 8 05:25:25.717015 systemd[1]: session-28.scope: Deactivated successfully.
May 8 05:25:25.718137 systemd-logind[1443]: Session 28 logged out. Waiting for processes to exit.
May 8 05:25:25.721276 systemd-logind[1443]: Removed session 28.
May 8 05:25:30.745326 systemd[1]: Started sshd@26-172.24.4.234:22-172.24.4.1:44098.service - OpenSSH per-connection server daemon (172.24.4.1:44098).
May 8 05:25:31.886385 sshd[6005]: Accepted publickey for core from 172.24.4.1 port 44098 ssh2: RSA SHA256:ScpdhswzPgV8kyIXhMpBXDyS95XWYZ4E3sew5Lv8N40
May 8 05:25:31.892335 sshd[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 05:25:31.904810 systemd-logind[1443]: New session 29 of user core.
May 8 05:25:31.921345 systemd[1]: Started session-29.scope - Session 29 of User core.
May 8 05:25:32.670166 sshd[6005]: pam_unix(sshd:session): session closed for user core
May 8 05:25:32.675809 systemd[1]: sshd@26-172.24.4.234:22-172.24.4.1:44098.service: Deactivated successfully.
May 8 05:25:32.683357 systemd[1]: session-29.scope: Deactivated successfully.
May 8 05:25:32.686678 systemd-logind[1443]: Session 29 logged out. Waiting for processes to exit.
May 8 05:25:32.688324 systemd-logind[1443]: Removed session 29.