May 15 16:27:19.916525 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 15 10:42:41 -00 2025 May 15 16:27:19.916553 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c May 15 16:27:19.916564 kernel: BIOS-provided physical RAM map: May 15 16:27:19.916574 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 15 16:27:19.916582 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 15 16:27:19.916590 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 15 16:27:19.916599 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 15 16:27:19.916607 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 15 16:27:19.916615 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 15 16:27:19.916623 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 15 16:27:19.916632 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 15 16:27:19.916640 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 15 16:27:19.916649 kernel: NX (Execute Disable) protection: active May 15 16:27:19.916658 kernel: APIC: Static calls initialized May 15 16:27:19.916667 kernel: SMBIOS 3.0.0 present. May 15 16:27:19.916676 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 15 16:27:19.916684 kernel: DMI: Memory slots populated: 1/1 May 15 16:27:19.916694 kernel: Hypervisor detected: KVM May 15 16:27:19.916703 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 15 16:27:19.916713 kernel: kvm-clock: using sched offset of 4821522219 cycles May 15 16:27:19.916721 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 16:27:19.916729 kernel: tsc: Detected 1996.249 MHz processor May 15 16:27:19.916738 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 16:27:19.916746 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 16:27:19.916755 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 15 16:27:19.916763 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 15 16:27:19.916773 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 16:27:19.916781 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 15 16:27:19.916790 kernel: ACPI: Early table checksum verification disabled May 15 16:27:19.916798 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 15 16:27:19.916806 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 16:27:19.916814 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 16:27:19.916822 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 16:27:19.916830 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 15 16:27:19.916839 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 
16:27:19.916848 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 16:27:19.916856 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 15 16:27:19.916865 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 15 16:27:19.916873 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 15 16:27:19.916881 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 15 16:27:19.916892 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 15 16:27:19.916900 kernel: No NUMA configuration found May 15 16:27:19.916910 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 15 16:27:19.916919 kernel: NODE_DATA(0) allocated [mem 0x13fff5dc0-0x13fffcfff] May 15 16:27:19.916927 kernel: Zone ranges: May 15 16:27:19.916936 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 16:27:19.916944 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 15 16:27:19.916953 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 15 16:27:19.916961 kernel: Device empty May 15 16:27:19.916969 kernel: Movable zone start for each node May 15 16:27:19.916979 kernel: Early memory node ranges May 15 16:27:19.916988 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 15 16:27:19.916996 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 15 16:27:19.917005 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 15 16:27:19.917013 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 15 16:27:19.917022 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 16:27:19.917030 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 15 16:27:19.917038 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 15 16:27:19.917047 kernel: ACPI: PM-Timer IO Port: 0x608 May 15 16:27:19.917057 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 15 16:27:19.917066 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 15 16:27:19.917074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 15 16:27:19.917083 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 15 16:27:19.917091 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 15 16:27:19.917099 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 15 16:27:19.917108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 15 16:27:19.917116 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 16:27:19.917125 kernel: CPU topo: Max. logical packages: 2 May 15 16:27:19.917135 kernel: CPU topo: Max. logical dies: 2 May 15 16:27:19.917143 kernel: CPU topo: Max. dies per package: 1 May 15 16:27:19.917151 kernel: CPU topo: Max. threads per core: 1 May 15 16:27:19.917160 kernel: CPU topo: Num. cores per package: 1 May 15 16:27:19.917168 kernel: CPU topo: Num. 
threads per package: 1 May 15 16:27:19.917176 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs May 15 16:27:19.917185 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 15 16:27:19.917193 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 15 16:27:19.917202 kernel: Booting paravirtualized kernel on KVM May 15 16:27:19.917212 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 16:27:19.917221 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 15 16:27:19.917229 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 May 15 16:27:19.917238 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 May 15 16:27:19.917246 kernel: pcpu-alloc: [0] 0 1 May 15 16:27:19.917254 kernel: kvm-guest: PV spinlocks disabled, no host support May 15 16:27:19.917264 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c May 15 16:27:19.917273 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 16:27:19.917284 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 16:27:19.917293 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 16:27:19.917301 kernel: Fallback order for Node 0: 0 May 15 16:27:19.918393 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 May 15 16:27:19.918407 kernel: Policy zone: Normal May 15 16:27:19.918416 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 16:27:19.918425 kernel: software IO TLB: area num 2. May 15 16:27:19.918434 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 15 16:27:19.918443 kernel: ftrace: allocating 40065 entries in 157 pages May 15 16:27:19.918456 kernel: ftrace: allocated 157 pages with 5 groups May 15 16:27:19.918465 kernel: Dynamic Preempt: voluntary May 15 16:27:19.918474 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 16:27:19.918485 kernel: rcu: RCU event tracing is enabled. May 15 16:27:19.918494 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 15 16:27:19.918504 kernel: Trampoline variant of Tasks RCU enabled. May 15 16:27:19.918513 kernel: Rude variant of Tasks RCU enabled. May 15 16:27:19.918522 kernel: Tracing variant of Tasks RCU enabled. May 15 16:27:19.918531 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 15 16:27:19.918540 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 15 16:27:19.918552 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 16:27:19.918562 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 16:27:19.918571 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 15 16:27:19.918580 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 15 16:27:19.918590 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
May 15 16:27:19.918599 kernel: Console: colour VGA+ 80x25 May 15 16:27:19.918608 kernel: printk: legacy console [tty0] enabled May 15 16:27:19.918617 kernel: printk: legacy console [ttyS0] enabled May 15 16:27:19.918626 kernel: ACPI: Core revision 20240827 May 15 16:27:19.918637 kernel: APIC: Switch to symmetric I/O mode setup May 15 16:27:19.918647 kernel: x2apic enabled May 15 16:27:19.918656 kernel: APIC: Switched APIC routing to: physical x2apic May 15 16:27:19.918665 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 15 16:27:19.918674 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 15 16:27:19.918690 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) May 15 16:27:19.918702 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 15 16:27:19.918713 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 15 16:27:19.918722 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 16:27:19.918731 kernel: Spectre V2 : Mitigation: Retpolines May 15 16:27:19.918740 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 15 16:27:19.918751 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 15 16:27:19.918760 kernel: Speculative Store Bypass: Vulnerable May 15 16:27:19.918769 kernel: x86/fpu: x87 FPU will use FXSAVE May 15 16:27:19.918778 kernel: Freeing SMP alternatives memory: 32K May 15 16:27:19.918787 kernel: pid_max: default: 32768 minimum: 301 May 15 16:27:19.918797 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 15 16:27:19.918806 kernel: landlock: Up and running. May 15 16:27:19.918815 kernel: SELinux: Initializing. May 15 16:27:19.918824 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 16:27:19.918833 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 16:27:19.918842 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 15 16:27:19.918851 kernel: Performance Events: AMD PMU driver. May 15 16:27:19.918860 kernel: ... version: 0 May 15 16:27:19.918869 kernel: ... bit width: 48 May 15 16:27:19.918879 kernel: ... generic registers: 4 May 15 16:27:19.918888 kernel: ... value mask: 0000ffffffffffff May 15 16:27:19.918897 kernel: ... max period: 00007fffffffffff May 15 16:27:19.918906 kernel: ... fixed-purpose events: 0 May 15 16:27:19.918915 kernel: ... event mask: 000000000000000f May 15 16:27:19.918924 kernel: signal: max sigframe size: 1440 May 15 16:27:19.918932 kernel: rcu: Hierarchical SRCU implementation. May 15 16:27:19.918941 kernel: rcu: Max phase no-delay instances is 400. May 15 16:27:19.918950 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 15 16:27:19.918959 kernel: smp: Bringing up secondary CPUs ... May 15 16:27:19.918970 kernel: smpboot: x86: Booting SMP configuration: May 15 16:27:19.918979 kernel: .... 
node #0, CPUs: #1 May 15 16:27:19.918988 kernel: smp: Brought up 1 node, 2 CPUs May 15 16:27:19.919012 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 15 16:27:19.919022 kernel: Memory: 3961268K/4193772K available (14336K kernel code, 2438K rwdata, 9944K rodata, 54416K init, 2544K bss, 227300K reserved, 0K cma-reserved) May 15 16:27:19.919031 kernel: devtmpfs: initialized May 15 16:27:19.919040 kernel: x86/mm: Memory block size: 128MB May 15 16:27:19.919049 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 16:27:19.919058 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 15 16:27:19.919069 kernel: pinctrl core: initialized pinctrl subsystem May 15 16:27:19.919078 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 16:27:19.919089 kernel: audit: initializing netlink subsys (disabled) May 15 16:27:19.919099 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 16:27:19.919108 kernel: audit: type=2000 audit(1747326436.866:1): state=initialized audit_enabled=0 res=1 May 15 16:27:19.919118 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 16:27:19.919127 kernel: cpuidle: using governor menu May 15 16:27:19.919137 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 16:27:19.919146 kernel: dca service started, version 1.12.1 May 15 16:27:19.919157 kernel: PCI: Using configuration type 1 for base access May 15 16:27:19.919167 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 15 16:27:19.919177 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 16:27:19.919186 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 15 16:27:19.919196 kernel: ACPI: Added _OSI(Module Device) May 15 16:27:19.919205 kernel: ACPI: Added _OSI(Processor Device) May 15 16:27:19.919215 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 16:27:19.919224 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 16:27:19.919234 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 16:27:19.919245 kernel: ACPI: Interpreter enabled May 15 16:27:19.919254 kernel: ACPI: PM: (supports S0 S3 S5) May 15 16:27:19.919264 kernel: ACPI: Using IOAPIC for interrupt routing May 15 16:27:19.919274 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 16:27:19.919284 kernel: PCI: Using E820 reservations for host bridge windows May 15 16:27:19.919293 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 15 16:27:19.919303 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 16:27:19.920111 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 15 16:27:19.920215 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] May 15 16:27:19.920306 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge May 15 16:27:19.920341 kernel: acpiphp: Slot [3] registered May 15 16:27:19.920351 kernel: acpiphp: Slot [4] registered May 15 16:27:19.920361 kernel: acpiphp: Slot [5] registered May 15 16:27:19.920385 kernel: acpiphp: Slot [6] registered May 15 16:27:19.920395 kernel: acpiphp: Slot [7] registered May 15 16:27:19.920404 kernel: acpiphp: Slot [8] registered May 15 16:27:19.920417 kernel: acpiphp: Slot [9] registered May 15 16:27:19.920427 kernel: acpiphp: Slot [10] 
registered May 15 16:27:19.920437 kernel: acpiphp: Slot [11] registered May 15 16:27:19.920446 kernel: acpiphp: Slot [12] registered May 15 16:27:19.920456 kernel: acpiphp: Slot [13] registered May 15 16:27:19.920465 kernel: acpiphp: Slot [14] registered May 15 16:27:19.920475 kernel: acpiphp: Slot [15] registered May 15 16:27:19.920484 kernel: acpiphp: Slot [16] registered May 15 16:27:19.920494 kernel: acpiphp: Slot [17] registered May 15 16:27:19.920505 kernel: acpiphp: Slot [18] registered May 15 16:27:19.920514 kernel: acpiphp: Slot [19] registered May 15 16:27:19.920524 kernel: acpiphp: Slot [20] registered May 15 16:27:19.920533 kernel: acpiphp: Slot [21] registered May 15 16:27:19.920543 kernel: acpiphp: Slot [22] registered May 15 16:27:19.920552 kernel: acpiphp: Slot [23] registered May 15 16:27:19.920561 kernel: acpiphp: Slot [24] registered May 15 16:27:19.920571 kernel: acpiphp: Slot [25] registered May 15 16:27:19.920580 kernel: acpiphp: Slot [26] registered May 15 16:27:19.920590 kernel: acpiphp: Slot [27] registered May 15 16:27:19.920601 kernel: acpiphp: Slot [28] registered May 15 16:27:19.920610 kernel: acpiphp: Slot [29] registered May 15 16:27:19.920620 kernel: acpiphp: Slot [30] registered May 15 16:27:19.920629 kernel: acpiphp: Slot [31] registered May 15 16:27:19.920638 kernel: PCI host bridge to bus 0000:00 May 15 16:27:19.920743 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 16:27:19.920822 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 15 16:27:19.920897 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 16:27:19.920976 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 15 16:27:19.921049 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 15 16:27:19.921122 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 16:27:19.921221 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint May 15 16:27:19.922107 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint May 15 16:27:19.922224 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint May 15 16:27:19.922346 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f] May 15 16:27:19.922444 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk May 15 16:27:19.923561 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk May 15 16:27:19.923656 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk May 15 16:27:19.923756 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk May 15 16:27:19.923891 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint May 15 16:27:19.923986 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 15 16:27:19.924084 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 15 16:27:19.924186 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint May 15 16:27:19.924282 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] May 15 16:27:19.924407 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref] May 15 16:27:19.924502 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff] May 15 16:27:19.924595 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref] May 15 16:27:19.924686 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 
0x000c0000-0x000dffff] May 15 16:27:19.924792 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 15 16:27:19.924884 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf] May 15 16:27:19.924977 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff] May 15 16:27:19.925069 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref] May 15 16:27:19.925161 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref] May 15 16:27:19.925260 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 15 16:27:19.925382 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] May 15 16:27:19.925476 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff] May 15 16:27:19.925567 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref] May 15 16:27:19.925674 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint May 15 16:27:19.925766 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff] May 15 16:27:19.925871 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref] May 15 16:27:19.926020 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint May 15 16:27:19.926122 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f] May 15 16:27:19.926215 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff] May 15 16:27:19.926307 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref] May 15 16:27:19.926346 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 15 16:27:19.926357 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 15 16:27:19.926367 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 16:27:19.926377 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 15 16:27:19.926387 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 15 16:27:19.926400 kernel: iommu: Default domain type: Translated May 15 16:27:19.926410 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 16:27:19.926420 kernel: PCI: Using ACPI for IRQ routing May 15 16:27:19.926429 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 16:27:19.926439 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 15 16:27:19.926449 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 15 16:27:19.926544 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 15 16:27:19.926636 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 15 16:27:19.926727 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 15 16:27:19.926743 kernel: vgaarb: loaded May 15 16:27:19.926753 kernel: clocksource: Switched to clocksource kvm-clock May 15 16:27:19.926762 kernel: VFS: Disk quotas dquot_6.6.0 May 15 16:27:19.926771 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 16:27:19.926780 kernel: pnp: PnP ACPI init May 15 16:27:19.926866 kernel: pnp 00:03: [dma 2] May 15 16:27:19.926881 kernel: pnp: PnP ACPI: found 5 devices May 15 16:27:19.926890 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 15 16:27:19.926902 kernel: NET: Registered PF_INET protocol family May 15 16:27:19.926911 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 16:27:19.926921 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 16:27:19.926930 kernel: Table-perturb hash table entries: 65536 
(order: 6, 262144 bytes, linear) May 15 16:27:19.926939 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 16:27:19.926948 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 15 16:27:19.926957 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 16:27:19.926966 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 16:27:19.926975 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 16:27:19.926986 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 16:27:19.926995 kernel: NET: Registered PF_XDP protocol family May 15 16:27:19.927092 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 15 16:27:19.927174 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 15 16:27:19.927253 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 15 16:27:19.927679 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 15 16:27:19.927767 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 15 16:27:19.927866 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 15 16:27:19.927966 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 15 16:27:19.927980 kernel: PCI: CLS 0 bytes, default 64 May 15 16:27:19.927991 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 15 16:27:19.928001 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 15 16:27:19.928011 kernel: Initialise system trusted keyrings May 15 16:27:19.928021 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 16:27:19.928031 kernel: Key type asymmetric registered May 15 16:27:19.928040 kernel: Asymmetric key parser 'x509' registered May 15 16:27:19.928050 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 15 16:27:19.928063 kernel: io scheduler mq-deadline registered May 15 16:27:19.928073 kernel: io scheduler kyber registered May 15 16:27:19.928082 kernel: io scheduler bfq registered May 15 16:27:19.928093 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 15 16:27:19.928103 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 15 16:27:19.928114 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 15 16:27:19.928123 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 15 16:27:19.928134 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 15 16:27:19.928144 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 16:27:19.928156 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 15 16:27:19.928166 kernel: random: crng init done May 15 16:27:19.928175 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 15 16:27:19.928185 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 15 16:27:19.928195 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 15 16:27:19.928205 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 15 16:27:19.928302 kernel: rtc_cmos 00:04: RTC can wake from S4 May 15 16:27:19.928414 kernel: rtc_cmos 00:04: registered as rtc0 May 15 16:27:19.928502 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T16:27:19 UTC (1747326439) May 15 16:27:19.928586 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 15 16:27:19.928615 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 15 16:27:19.928626 kernel: NET: Registered 
PF_INET6 protocol family May 15 16:27:19.928655 kernel: Segment Routing with IPv6 May 15 16:27:19.928666 kernel: In-situ OAM (IOAM) with IPv6 May 15 16:27:19.928675 kernel: NET: Registered PF_PACKET protocol family May 15 16:27:19.928685 kernel: Key type dns_resolver registered May 15 16:27:19.928695 kernel: IPI shorthand broadcast: enabled May 15 16:27:19.928709 kernel: sched_clock: Marking stable (3589007010, 185320677)->(3806958489, -32630802) May 15 16:27:19.928719 kernel: registered taskstats version 1 May 15 16:27:19.928740 kernel: Loading compiled-in X.509 certificates May 15 16:27:19.928769 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 05e05785144663be6df1db78301487421c4773b6' May 15 16:27:19.928779 kernel: Demotion targets for Node 0: null May 15 16:27:19.928788 kernel: Key type .fscrypt registered May 15 16:27:19.928797 kernel: Key type fscrypt-provisioning registered May 15 16:27:19.928806 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 16:27:19.928818 kernel: ima: Allocated hash algorithm: sha1 May 15 16:27:19.928827 kernel: ima: No architecture policies found May 15 16:27:19.928836 kernel: clk: Disabling unused clocks May 15 16:27:19.928845 kernel: Warning: unable to open an initial console. May 15 16:27:19.928854 kernel: Freeing unused kernel image (initmem) memory: 54416K May 15 16:27:19.928863 kernel: Write protecting the kernel read-only data: 24576k May 15 16:27:19.928873 kernel: Freeing unused kernel image (rodata/data gap) memory: 296K May 15 16:27:19.928882 kernel: Run /init as init process May 15 16:27:19.928891 kernel: with arguments: May 15 16:27:19.928901 kernel: /init May 15 16:27:19.928910 kernel: with environment: May 15 16:27:19.928919 kernel: HOME=/ May 15 16:27:19.928928 kernel: TERM=linux May 15 16:27:19.928936 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 16:27:19.928947 systemd[1]: Successfully made /usr/ read-only. May 15 16:27:19.928960 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 16:27:19.928979 systemd[1]: Detected virtualization kvm. May 15 16:27:19.928989 systemd[1]: Detected architecture x86-64. May 15 16:27:19.928998 systemd[1]: Running in initrd. May 15 16:27:19.929008 systemd[1]: No hostname configured, using default hostname. May 15 16:27:19.929018 systemd[1]: Hostname set to . May 15 16:27:19.929028 systemd[1]: Initializing machine ID from VM UUID. May 15 16:27:19.929038 systemd[1]: Queued start job for default target initrd.target. May 15 16:27:19.929050 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 16:27:19.929060 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 16:27:19.929070 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 16:27:19.929080 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 16:27:19.929091 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 16:27:19.929101 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
May 15 16:27:19.929114 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 16:27:19.929124 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 16:27:19.929134 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 16:27:19.929144 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 16:27:19.929154 systemd[1]: Reached target paths.target - Path Units. May 15 16:27:19.929164 systemd[1]: Reached target slices.target - Slice Units. May 15 16:27:19.929174 systemd[1]: Reached target swap.target - Swaps. May 15 16:27:19.929184 systemd[1]: Reached target timers.target - Timer Units. May 15 16:27:19.929194 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 16:27:19.929205 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 16:27:19.929215 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 16:27:19.929226 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 15 16:27:19.929236 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 16:27:19.929246 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 16:27:19.929256 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 16:27:19.929266 systemd[1]: Reached target sockets.target - Socket Units. May 15 16:27:19.929276 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 16:27:19.929287 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 16:27:19.929297 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 16:27:19.929324 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 15 16:27:19.929335 systemd[1]: Starting systemd-fsck-usr.service... May 15 16:27:19.929345 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 16:27:19.929356 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 16:27:19.929368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 16:27:19.929378 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 16:27:19.929389 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 16:27:19.929399 systemd[1]: Finished systemd-fsck-usr.service. May 15 16:27:19.929434 systemd-journald[212]: Collecting audit messages is disabled. May 15 16:27:19.929460 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 16:27:19.929471 systemd-journald[212]: Journal started May 15 16:27:19.929496 systemd-journald[212]: Runtime Journal (/run/log/journal/0f2e25be0470451d917012eea91ab438) is 8M, max 78.5M, 70.5M free. May 15 16:27:19.931319 systemd[1]: Started systemd-journald.service - Journal Service. May 15 16:27:19.935363 systemd-modules-load[213]: Inserted module 'overlay' May 15 16:27:19.936196 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 16:27:19.946518 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
May 15 16:27:20.002260 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 16:27:20.002289 kernel: Bridge firewalling registered May 15 16:27:19.958770 systemd-tmpfiles[227]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 15 16:27:19.969081 systemd-modules-load[213]: Inserted module 'br_netfilter' May 15 16:27:20.001618 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 16:27:20.003030 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 16:27:20.004678 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 16:27:20.010656 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 16:27:20.014432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 16:27:20.021047 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 16:27:20.028622 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 16:27:20.034447 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 16:27:20.038725 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 16:27:20.046428 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 16:27:20.050342 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 16:27:20.074419 dracut-cmdline[255]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=48287e633374b880fa618bd42bee102ae77c50831859c6cedd6ca9e1aec3dd5c May 15 16:27:20.092579 systemd-resolved[245]: Positive Trust Anchors: May 15 16:27:20.092593 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 16:27:20.092639 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 16:27:20.096497 systemd-resolved[245]: Defaulting to hostname 'linux'. May 15 16:27:20.097792 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 16:27:20.099791 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 16:27:20.175390 kernel: SCSI subsystem initialized May 15 16:27:20.190410 kernel: Loading iSCSI transport class v2.0-870. 
May 15 16:27:20.207373 kernel: iscsi: registered transport (tcp) May 15 16:27:20.238738 kernel: iscsi: registered transport (qla4xxx) May 15 16:27:20.238820 kernel: QLogic iSCSI HBA Driver May 15 16:27:20.271235 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 16:27:20.290136 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 16:27:20.307948 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 16:27:20.422825 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 16:27:20.426859 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 16:27:20.535433 kernel: raid6: sse2x4 gen() 4549 MB/s May 15 16:27:20.553367 kernel: raid6: sse2x2 gen() 11260 MB/s May 15 16:27:20.572460 kernel: raid6: sse2x1 gen() 8182 MB/s May 15 16:27:20.572561 kernel: raid6: using algorithm sse2x2 gen() 11260 MB/s May 15 16:27:20.591799 kernel: raid6: .... xor() 8156 MB/s, rmw enabled May 15 16:27:20.591860 kernel: raid6: using ssse3x2 recovery algorithm May 15 16:27:20.617567 kernel: xor: measuring software checksum speed May 15 16:27:20.617664 kernel: prefetch64-sse : 13182 MB/sec May 15 16:27:20.620343 kernel: generic_sse : 11998 MB/sec May 15 16:27:20.620460 kernel: xor: using function: prefetch64-sse (13182 MB/sec) May 15 16:27:20.853375 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 16:27:20.861917 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 16:27:20.870042 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 16:27:20.932130 systemd-udevd[463]: Using default interface naming scheme 'v255'. May 15 16:27:20.947911 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 16:27:20.955615 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 16:27:20.986390 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation May 15 16:27:21.040809 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 16:27:21.048682 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 16:27:21.167782 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 16:27:21.178196 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 16:27:21.266096 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues May 15 16:27:21.317660 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 15 16:27:21.317794 kernel: libata version 3.00 loaded. May 15 16:27:21.317809 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 16:27:21.317829 kernel: GPT:17805311 != 20971519 May 15 16:27:21.317841 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 16:27:21.317852 kernel: GPT:17805311 != 20971519 May 15 16:27:21.317863 kernel: GPT: Use GNU Parted to correct GPT errors. 
May 15 16:27:21.317874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 16:27:21.317886 kernel: ata_piix 0000:00:01.1: version 2.13 May 15 16:27:21.330539 kernel: scsi host0: ata_piix May 15 16:27:21.330676 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 15 16:27:21.330700 kernel: scsi host1: ata_piix May 15 16:27:21.330818 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0 May 15 16:27:21.330833 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0 May 15 16:27:21.317472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 16:27:21.318277 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 16:27:21.319490 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 16:27:21.321285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 16:27:21.322703 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 16:27:21.405920 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 16:27:21.423082 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 16:27:21.424370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 16:27:21.436624 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 16:27:21.446955 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 16:27:21.447755 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 16:27:21.452460 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 16:27:21.496770 disk-uuid[563]: Primary Header is updated. May 15 16:27:21.496770 disk-uuid[563]: Secondary Entries is updated. May 15 16:27:21.496770 disk-uuid[563]: Secondary Header is updated. May 15 16:27:21.505240 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 16:27:21.509187 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 16:27:21.514623 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 16:27:21.511059 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 16:27:21.516296 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 16:27:21.525589 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 16:27:21.548101 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 16:27:21.584407 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 16:27:22.541397 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 16:27:22.543489 disk-uuid[566]: The operation has completed successfully. May 15 16:27:22.654961 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 16:27:22.655772 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 16:27:22.687456 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 16:27:22.718215 sh[588]: Success May 15 16:27:22.755782 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
May 15 16:27:22.756058 kernel: device-mapper: uevent: version 1.0.3 May 15 16:27:22.759345 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 15 16:27:22.777456 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3" May 15 16:27:22.869515 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 16:27:22.872397 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 16:27:22.901885 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 16:27:22.909778 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 15 16:27:22.909849 kernel: BTRFS: device fsid 2d504097-db49-4d66-a0d5-eeb665b21004 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (600) May 15 16:27:22.917454 kernel: BTRFS info (device dm-0): first mount of filesystem 2d504097-db49-4d66-a0d5-eeb665b21004 May 15 16:27:22.917585 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 15 16:27:22.919531 kernel: BTRFS info (device dm-0): using free-space-tree May 15 16:27:22.938303 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 16:27:22.940901 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 15 16:27:22.943036 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 16:27:22.946753 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 16:27:22.952259 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 16:27:22.984464 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (623) May 15 16:27:22.989436 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 16:27:22.989570 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 16:27:22.991610 kernel: BTRFS info (device vda6): using free-space-tree May 15 16:27:23.014341 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 16:27:23.015151 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 16:27:23.019566 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 16:27:23.093427 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 16:27:23.097491 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 16:27:23.149858 systemd-networkd[771]: lo: Link UP May 15 16:27:23.150393 systemd-networkd[771]: lo: Gained carrier May 15 16:27:23.152910 systemd-networkd[771]: Enumeration completed May 15 16:27:23.153632 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 16:27:23.155397 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 16:27:23.155403 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 16:27:23.156186 systemd-networkd[771]: eth0: Link UP May 15 16:27:23.156191 systemd-networkd[771]: eth0: Gained carrier May 15 16:27:23.156202 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 15 16:27:23.157662 systemd[1]: Reached target network.target - Network. May 15 16:27:23.172416 systemd-networkd[771]: eth0: DHCPv4 address 172.24.4.121/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 15 16:27:23.263461 ignition[668]: Ignition 2.21.0 May 15 16:27:23.263479 ignition[668]: Stage: fetch-offline May 15 16:27:23.263536 ignition[668]: no configs at "/usr/lib/ignition/base.d" May 15 16:27:23.263548 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 16:27:23.268098 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 16:27:23.263673 ignition[668]: parsed url from cmdline: "" May 15 16:27:23.263679 ignition[668]: no config URL provided May 15 16:27:23.263685 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" May 15 16:27:23.263695 ignition[668]: no config at "/usr/lib/ignition/user.ign" May 15 16:27:23.263701 ignition[668]: failed to fetch config: resource requires networking May 15 16:27:23.263918 ignition[668]: Ignition finished successfully May 15 16:27:23.272607 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 15 16:27:23.315059 ignition[782]: Ignition 2.21.0 May 15 16:27:23.315092 ignition[782]: Stage: fetch May 15 16:27:23.316553 ignition[782]: no configs at "/usr/lib/ignition/base.d" May 15 16:27:23.316591 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 16:27:23.316981 ignition[782]: parsed url from cmdline: "" May 15 16:27:23.317002 ignition[782]: no config URL provided May 15 16:27:23.317032 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" May 15 16:27:23.317052 ignition[782]: no config at "/usr/lib/ignition/user.ign" May 15 16:27:23.318857 ignition[782]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 15 16:27:23.324012 ignition[782]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 15 16:27:23.324075 ignition[782]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 15 16:27:23.487829 ignition[782]: GET result: OK May 15 16:27:23.488159 ignition[782]: parsing config with SHA512: 11703b1f32b61235f85b70fb7324f82ad641f168e7d8ba66ac8e8da188c9de49f50348840d804ba73129af6a44b5d992f165a836a0154720d5f965a123f4c0a2 May 15 16:27:23.501209 unknown[782]: fetched base config from "system" May 15 16:27:23.501238 unknown[782]: fetched base config from "system" May 15 16:27:23.502654 ignition[782]: fetch: fetch complete May 15 16:27:23.501246 unknown[782]: fetched user config from "openstack" May 15 16:27:23.502662 ignition[782]: fetch: fetch passed May 15 16:27:23.502757 ignition[782]: Ignition finished successfully May 15 16:27:23.510074 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 15 16:27:23.518691 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 16:27:23.574279 ignition[789]: Ignition 2.21.0 May 15 16:27:23.574297 ignition[789]: Stage: kargs May 15 16:27:23.574537 ignition[789]: no configs at "/usr/lib/ignition/base.d" May 15 16:27:23.574553 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 16:27:23.578264 ignition[789]: kargs: kargs passed May 15 16:27:23.578354 ignition[789]: Ignition finished successfully May 15 16:27:23.582292 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 16:27:23.591191 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
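The fetch-offline and fetch stages above show Ignition's config lookup order on this OpenStack host: no config URL on the kernel command line, a wait for a config-2/CONFIG-2 labelled config drive, then a GET against http://169.254.169.254/openstack/latest/user_data. The Go sketch below only mirrors that logged order for illustration; it is not Ignition's actual code, and the helper names, timeout, and polling interval are assumptions.

// fetch_userdata.go - minimal sketch of the fetch order logged by the Ignition
// "fetch" stage above: probe for a config drive first, then fall back to the
// OpenStack metadata service. Illustrative only; helper names and the 5s wait
// are assumptions, not Ignition's implementation.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// userDataURL is the endpoint the log shows Ignition querying.
const userDataURL = "http://169.254.169.254/openstack/latest/user_data"

// configDriveLabels are the by-label paths the log shows Ignition waiting on.
var configDriveLabels = []string{
	"/dev/disk/by-label/config-2",
	"/dev/disk/by-label/CONFIG-2",
}

// findConfigDrive reports the first config-drive device node that exists.
func findConfigDrive() (string, bool) {
	for _, p := range configDriveLabels {
		if _, err := os.Stat(p); err == nil {
			return p, true
		}
	}
	return "", false
}

// fetchFromMetadataService performs the HTTP GET seen in the log.
func fetchFromMetadataService() ([]byte, error) {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(userDataURL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", userDataURL, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	// Prefer the config drive, as the repeated "config drive (...) not found.
	// Waiting..." messages above imply; give up after a short wait (duration assumed).
	deadline := time.Now().Add(5 * time.Second)
	for time.Now().Before(deadline) {
		if dev, ok := findConfigDrive(); ok {
			fmt.Println("config drive found at", dev, "- would mount it and read user_data")
			return
		}
		time.Sleep(time.Second)
	}

	// No config drive: fall back to the metadata service, matching the
	// "GET http://169.254.169.254/openstack/latest/user_data" line in the log.
	data, err := fetchFromMetadataService()
	if err != nil {
		fmt.Fprintln(os.Stderr, "fetch failed:", err)
		os.Exit(1)
	}
	fmt.Printf("fetched %d bytes of user_data\n", len(data))
}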
May 15 16:27:23.682703 ignition[795]: Ignition 2.21.0 May 15 16:27:23.682731 ignition[795]: Stage: disks May 15 16:27:23.682981 ignition[795]: no configs at "/usr/lib/ignition/base.d" May 15 16:27:23.689437 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 16:27:23.683015 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 16:27:23.694275 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 16:27:23.684928 ignition[795]: disks: disks passed May 15 16:27:23.696542 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 16:27:23.685011 ignition[795]: Ignition finished successfully May 15 16:27:23.698441 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 16:27:23.700459 systemd[1]: Reached target sysinit.target - System Initialization. May 15 16:27:23.702275 systemd[1]: Reached target basic.target - Basic System. May 15 16:27:23.709815 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 16:27:23.781186 systemd-fsck[803]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks May 15 16:27:23.795739 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 16:27:23.802039 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 16:27:24.031413 kernel: EXT4-fs (vda9): mounted filesystem f7dea4bd-2644-4592-b85b-330f322c4d2b r/w with ordered data mode. Quota mode: none. May 15 16:27:24.034283 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 16:27:24.036544 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 16:27:24.042476 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 16:27:24.062366 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 16:27:24.065294 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 16:27:24.069017 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... May 15 16:27:24.078849 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 16:27:24.105391 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (811) May 15 16:27:24.105486 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 16:27:24.105539 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 16:27:24.105570 kernel: BTRFS info (device vda6): using free-space-tree May 15 16:27:24.078938 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 16:27:24.109790 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 16:27:24.115450 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 16:27:24.128100 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 16:27:24.206379 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 15 16:27:24.212024 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory May 15 16:27:24.219893 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory May 15 16:27:24.225367 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory May 15 16:27:24.232737 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory May 15 16:27:24.382045 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 16:27:24.386402 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 16:27:24.387743 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 16:27:24.410721 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 16:27:24.411821 kernel: BTRFS info (device vda6): last unmount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 16:27:24.425537 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 16:27:24.441501 ignition[930]: INFO : Ignition 2.21.0 May 15 16:27:24.441501 ignition[930]: INFO : Stage: mount May 15 16:27:24.442857 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 16:27:24.442857 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 16:27:24.442857 ignition[930]: INFO : mount: mount passed May 15 16:27:24.442857 ignition[930]: INFO : Ignition finished successfully May 15 16:27:24.444595 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 16:27:25.124018 systemd-networkd[771]: eth0: Gained IPv6LL May 15 16:27:25.248409 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 15 16:27:27.263390 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 15 16:27:31.277355 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 15 16:27:31.286035 coreos-metadata[813]: May 15 16:27:31.285 WARN failed to locate config-drive, using the metadata service API instead May 15 16:27:31.333903 coreos-metadata[813]: May 15 16:27:31.333 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 15 16:27:31.350730 coreos-metadata[813]: May 15 16:27:31.350 INFO Fetch successful May 15 16:27:31.350730 coreos-metadata[813]: May 15 16:27:31.350 INFO wrote hostname ci-4334-0-0-a-855fb07f2a.novalocal to /sysroot/etc/hostname May 15 16:27:31.353253 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 15 16:27:31.353522 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. May 15 16:27:31.361273 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 16:27:31.407773 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 16:27:31.443397 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (945) May 15 16:27:31.451533 kernel: BTRFS info (device vda6): first mount of filesystem afd0c70c-d15e-448c-8325-f96e3c3ed3a5 May 15 16:27:31.451603 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 16:27:31.455030 kernel: BTRFS info (device vda6): using free-space-tree May 15 16:27:31.468020 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
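The flatcar-openstack-hostname.service lines above follow the same config-drive-first pattern: repeated "Can't lookup blockdev" probes for /dev/disk/by-label/config-2, a warning that the config drive could not be located, a fall back to the metadata API at http://169.254.169.254/latest/meta-data/hostname, and finally the hostname being written to /sysroot/etc/hostname. Below is a compact Go sketch of that last fetch-and-write step, assuming the new root is mounted at /sysroot; the function names and error handling are illustrative, not coreos-metadata's.

// write_hostname.go - minimal sketch of the hostname step logged above: fetch
// the instance hostname from the metadata endpoint and persist it under the
// sysroot. Illustrative only; not the coreos-metadata implementation.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
	"time"
)

const (
	hostnameURL  = "http://169.254.169.254/latest/meta-data/hostname" // endpoint seen in the log
	hostnamePath = "/sysroot/etc/hostname"                            // target file seen in the log
)

func fetchHostname() (string, error) {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(hostnameURL)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("GET %s: %s", hostnameURL, resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(body)), nil
}

func main() {
	hostname, err := fetchHostname()
	if err != nil {
		fmt.Fprintln(os.Stderr, "hostname fetch failed:", err)
		os.Exit(1)
	}
	// Persist with a trailing newline, e.g. "ci-4334-0-0-a-855fb07f2a.novalocal\n".
	if err := os.WriteFile(hostnamePath, []byte(hostname+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "writing", hostnamePath, "failed:", err)
		os.Exit(1)
	}
	fmt.Println("wrote hostname", hostname, "to", hostnamePath)
}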
May 15 16:27:31.534064 ignition[963]: INFO : Ignition 2.21.0 May 15 16:27:31.534064 ignition[963]: INFO : Stage: files May 15 16:27:31.537786 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 16:27:31.537786 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 16:27:31.542537 ignition[963]: DEBUG : files: compiled without relabeling support, skipping May 15 16:27:31.545558 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 16:27:31.545558 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 16:27:31.551074 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 16:27:31.552974 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 16:27:31.552974 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 16:27:31.551914 unknown[963]: wrote ssh authorized keys file for user: core May 15 16:27:31.559562 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 16:27:31.562400 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 15 16:27:31.696961 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 16:27:31.985545 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 16:27:31.987464 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 16:27:31.987464 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 16:27:31.987464 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 16:27:31.987464 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 16:27:31.987464 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 16:27:31.998874 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 16:27:31.998874 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 16:27:31.998874 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 16:27:31.998874 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 16:27:31.998874 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 16:27:31.998874 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 16:27:31.998874 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 16:27:31.998874 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 16:27:31.998874 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 15 16:27:32.730835 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 16:27:34.403034 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 16:27:34.403034 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 16:27:34.412368 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 16:27:34.474341 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 16:27:34.474341 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 16:27:34.481453 ignition[963]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 15 16:27:34.481453 ignition[963]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 15 16:27:34.481453 ignition[963]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 16:27:34.481453 ignition[963]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 16:27:34.481453 ignition[963]: INFO : files: files passed May 15 16:27:34.481453 ignition[963]: INFO : Ignition finished successfully May 15 16:27:34.493159 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 16:27:34.507071 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 16:27:34.511891 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 16:27:34.608720 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 16:27:34.608993 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 16:27:34.637245 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 16:27:34.640737 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 16:27:34.642841 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 16:27:34.644571 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 16:27:34.648447 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 16:27:34.652548 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 16:27:34.755124 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 16:27:34.755389 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 16:27:34.758683 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
May 15 16:27:34.761879 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 16:27:34.765576 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 16:27:34.769513 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 16:27:34.818863 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 16:27:34.824503 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 16:27:34.863547 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 16:27:34.865361 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 16:27:34.868407 systemd[1]: Stopped target timers.target - Timer Units. May 15 16:27:34.871283 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 16:27:34.871782 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 16:27:34.874827 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 16:27:34.877906 systemd[1]: Stopped target basic.target - Basic System. May 15 16:27:34.880494 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 16:27:34.882748 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 16:27:34.885819 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 16:27:34.888614 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 15 16:27:34.891701 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 16:27:34.894421 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 16:27:34.897545 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 16:27:34.900436 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 16:27:34.903068 systemd[1]: Stopped target swap.target - Swaps. May 15 16:27:34.905458 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 16:27:34.905940 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 16:27:34.908839 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 16:27:34.910690 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 16:27:34.913762 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 16:27:34.914088 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 16:27:34.916942 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 16:27:34.917458 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 16:27:34.920897 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 16:27:34.921457 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 16:27:34.925149 systemd[1]: ignition-files.service: Deactivated successfully. May 15 16:27:34.925637 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 16:27:34.930510 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 16:27:34.939103 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 16:27:34.941870 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
May 15 16:27:34.943522 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 16:27:34.951045 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 16:27:34.951549 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 16:27:34.967975 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 16:27:34.971396 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 16:27:34.986864 ignition[1016]: INFO : Ignition 2.21.0 May 15 16:27:34.986864 ignition[1016]: INFO : Stage: umount May 15 16:27:34.989738 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 16:27:34.989738 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 15 16:27:34.989738 ignition[1016]: INFO : umount: umount passed May 15 16:27:34.989738 ignition[1016]: INFO : Ignition finished successfully May 15 16:27:34.989511 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 16:27:34.989660 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 16:27:34.990732 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 16:27:34.990816 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 16:27:34.991596 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 16:27:34.991642 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 16:27:34.993416 systemd[1]: ignition-fetch.service: Deactivated successfully. May 15 16:27:34.993463 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 15 16:27:34.994436 systemd[1]: Stopped target network.target - Network. May 15 16:27:34.996068 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 16:27:34.996143 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 16:27:34.997144 systemd[1]: Stopped target paths.target - Path Units. May 15 16:27:34.997619 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 16:27:35.002586 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 16:27:35.003377 systemd[1]: Stopped target slices.target - Slice Units. May 15 16:27:35.003888 systemd[1]: Stopped target sockets.target - Socket Units. May 15 16:27:35.005119 systemd[1]: iscsid.socket: Deactivated successfully. May 15 16:27:35.005170 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 16:27:35.006165 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 16:27:35.006200 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 16:27:35.007374 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 16:27:35.007431 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 16:27:35.008641 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 16:27:35.008683 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 16:27:35.009793 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 16:27:35.011083 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 16:27:35.013773 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 16:27:35.017479 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 16:27:35.017591 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
May 15 16:27:35.022161 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 16:27:35.022293 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 16:27:35.025795 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 15 16:27:35.025981 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 16:27:35.026101 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 16:27:35.028163 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 16:27:35.029242 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 15 16:27:35.030576 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 16:27:35.030642 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 16:27:35.031811 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 16:27:35.031863 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 16:27:35.033743 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 16:27:35.035635 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 16:27:35.035689 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 16:27:35.037744 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 16:27:35.037825 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 16:27:35.039330 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 16:27:35.039386 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 16:27:35.040959 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 16:27:35.041009 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 16:27:35.042477 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 16:27:35.047044 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 16:27:35.047112 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 16:27:35.051660 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 16:27:35.053433 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 16:27:35.055925 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 16:27:35.055978 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 16:27:35.057995 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 16:27:35.058030 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 16:27:35.059176 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 16:27:35.059233 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 16:27:35.060989 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 16:27:35.061034 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 16:27:35.062027 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 16:27:35.062076 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 16:27:35.065899 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
May 15 16:27:35.067158 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 15 16:27:35.067211 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 15 16:27:35.068886 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 16:27:35.068933 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 16:27:35.071361 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 15 16:27:35.071430 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 16:27:35.073379 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 16:27:35.073422 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 16:27:35.074143 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 16:27:35.074208 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 16:27:35.078689 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 15 16:27:35.078744 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 15 16:27:35.078785 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 16:27:35.078824 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 16:27:35.079178 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 16:27:35.079286 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 16:27:35.084904 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 16:27:35.084990 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 16:27:35.086554 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 16:27:35.088223 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 16:27:35.110268 systemd[1]: Switching root. May 15 16:27:35.154782 systemd-journald[212]: Journal stopped May 15 16:27:36.895277 systemd-journald[212]: Received SIGTERM from PID 1 (systemd). May 15 16:27:36.898655 kernel: SELinux: policy capability network_peer_controls=1 May 15 16:27:36.898687 kernel: SELinux: policy capability open_perms=1 May 15 16:27:36.898705 kernel: SELinux: policy capability extended_socket_class=1 May 15 16:27:36.898744 kernel: SELinux: policy capability always_check_network=0 May 15 16:27:36.898757 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 16:27:36.898772 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 16:27:36.898787 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 16:27:36.898804 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 16:27:36.898843 kernel: SELinux: policy capability userspace_initial_context=0 May 15 16:27:36.898860 kernel: audit: type=1403 audit(1747326455.589:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 16:27:36.898882 systemd[1]: Successfully loaded SELinux policy in 60.576ms. May 15 16:27:36.898904 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 27.937ms. 
May 15 16:27:36.898926 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 16:27:36.898944 systemd[1]: Detected virtualization kvm. May 15 16:27:36.898976 systemd[1]: Detected architecture x86-64. May 15 16:27:36.898994 systemd[1]: Detected first boot. May 15 16:27:36.899041 systemd[1]: Hostname set to <ci-4334-0-0-a-855fb07f2a.novalocal>. May 15 16:27:36.899061 systemd[1]: Initializing machine ID from VM UUID. May 15 16:27:36.899075 zram_generator::config[1060]: No configuration found. May 15 16:27:36.899101 kernel: Guest personality initialized and is inactive May 15 16:27:36.899118 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 15 16:27:36.899131 kernel: Initialized host personality May 15 16:27:36.899143 kernel: NET: Registered PF_VSOCK protocol family May 15 16:27:36.899162 systemd[1]: Populated /etc with preset unit settings. May 15 16:27:36.899182 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 16:27:36.899225 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 16:27:36.899241 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 16:27:36.899260 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 16:27:36.899286 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 16:27:36.902178 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 16:27:36.902209 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 16:27:36.902223 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 16:27:36.902236 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 16:27:36.902279 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 16:27:36.902294 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 16:27:36.902307 systemd[1]: Created slice user.slice - User and Session Slice. May 15 16:27:36.902350 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 16:27:36.902364 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 16:27:36.902378 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 16:27:36.902391 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 16:27:36.902430 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 16:27:36.902445 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 16:27:36.902459 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 15 16:27:36.902472 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 16:27:36.902484 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 16:27:36.902498 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 16:27:36.902510 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 16:27:36.902522 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 16:27:36.902555 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 16:27:36.902571 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 16:27:36.902584 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 16:27:36.902597 systemd[1]: Reached target slices.target - Slice Units. May 15 16:27:36.902610 systemd[1]: Reached target swap.target - Swaps. May 15 16:27:36.902622 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 16:27:36.902635 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 16:27:36.902647 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 16:27:36.902660 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 16:27:36.902673 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 16:27:36.902705 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 16:27:36.902720 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 16:27:36.902732 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 16:27:36.902745 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 16:27:36.902757 systemd[1]: Mounting media.mount - External Media Directory... May 15 16:27:36.902770 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 16:27:36.902790 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 16:27:36.902802 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 16:27:36.902835 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 16:27:36.902850 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 16:27:36.902862 systemd[1]: Reached target machines.target - Containers. May 15 16:27:36.902875 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 16:27:36.902888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 16:27:36.902901 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 16:27:36.902914 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 16:27:36.902927 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 16:27:36.902939 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 16:27:36.902989 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 16:27:36.903005 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 16:27:36.903017 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 16:27:36.903030 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 15 16:27:36.903045 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 16:27:36.903058 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 16:27:36.903072 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 16:27:36.903085 systemd[1]: Stopped systemd-fsck-usr.service. May 15 16:27:36.903123 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 16:27:36.903139 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 16:27:36.903174 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 16:27:36.903189 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 16:27:36.903203 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 16:27:36.903243 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 16:27:36.903258 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 16:27:36.903273 systemd[1]: verity-setup.service: Deactivated successfully. May 15 16:27:36.903286 systemd[1]: Stopped verity-setup.service. May 15 16:27:36.903306 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 16:27:36.907422 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 16:27:36.907441 kernel: fuse: init (API version 7.41) May 15 16:27:36.907456 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 16:27:36.907470 systemd[1]: Mounted media.mount - External Media Directory. May 15 16:27:36.907484 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 16:27:36.907497 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 16:27:36.907511 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 16:27:36.907525 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 16:27:36.907539 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 16:27:36.907582 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 16:27:36.907598 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 16:27:36.907611 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 16:27:36.907624 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 16:27:36.907638 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 16:27:36.907652 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 16:27:36.907666 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 16:27:36.907679 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 16:27:36.907692 kernel: loop: module loaded May 15 16:27:36.907727 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 16:27:36.907742 kernel: ACPI: bus type drm_connector registered May 15 16:27:36.907756 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
May 15 16:27:36.907770 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 16:27:36.907784 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 16:27:36.907799 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 16:27:36.907879 systemd-journald[1150]: Collecting audit messages is disabled. May 15 16:27:36.907909 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 16:27:36.907944 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 16:27:36.907959 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 16:27:36.907972 systemd-journald[1150]: Journal started May 15 16:27:36.907998 systemd-journald[1150]: Runtime Journal (/run/log/journal/0f2e25be0470451d917012eea91ab438) is 8M, max 78.5M, 70.5M free. May 15 16:27:36.446178 systemd[1]: Queued start job for default target multi-user.target. May 15 16:27:36.467235 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 16:27:36.467784 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 16:27:36.914356 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 16:27:36.920581 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 16:27:36.923358 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 16:27:36.928350 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 16:27:36.934360 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 16:27:36.947519 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 16:27:36.961398 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 16:27:36.966379 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 16:27:36.969343 systemd[1]: Started systemd-journald.service - Journal Service. May 15 16:27:36.970825 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 16:27:36.971065 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 16:27:36.972080 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 16:27:36.972292 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 16:27:36.973824 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 16:27:36.975445 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 16:27:36.976054 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 16:27:36.980961 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 16:27:36.983767 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 16:27:36.997502 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 16:27:37.015760 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 16:27:37.021561 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
May 15 16:27:37.022382 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 16:27:37.060406 kernel: loop0: detected capacity change from 0 to 146240 May 15 16:27:37.096703 systemd-journald[1150]: Time spent on flushing to /var/log/journal/0f2e25be0470451d917012eea91ab438 is 29.004ms for 981 entries. May 15 16:27:37.096703 systemd-journald[1150]: System Journal (/var/log/journal/0f2e25be0470451d917012eea91ab438) is 8M, max 584.8M, 576.8M free. May 15 16:27:37.181213 systemd-journald[1150]: Received client request to flush runtime journal. May 15 16:27:37.181279 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 16:27:37.126485 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 16:27:37.133262 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. May 15 16:27:37.133278 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. May 15 16:27:37.148174 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 16:27:37.151824 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 16:27:37.159363 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 16:27:37.166063 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 16:27:37.184934 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 16:27:37.199369 kernel: loop1: detected capacity change from 0 to 8 May 15 16:27:37.225692 kernel: loop2: detected capacity change from 0 to 205544 May 15 16:27:37.270387 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 16:27:37.275077 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 16:27:37.289400 kernel: loop3: detected capacity change from 0 to 113872 May 15 16:27:37.311118 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. May 15 16:27:37.311141 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. May 15 16:27:37.316801 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 16:27:37.346482 kernel: loop4: detected capacity change from 0 to 146240 May 15 16:27:37.423506 kernel: loop5: detected capacity change from 0 to 8 May 15 16:27:37.430338 kernel: loop6: detected capacity change from 0 to 205544 May 15 16:27:37.473735 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 16:27:37.493348 kernel: loop7: detected capacity change from 0 to 113872 May 15 16:27:37.554461 (sd-merge)[1229]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. May 15 16:27:37.555366 (sd-merge)[1229]: Merged extensions into '/usr'. May 15 16:27:37.564558 systemd[1]: Reload requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)... May 15 16:27:37.564605 systemd[1]: Reloading... May 15 16:27:37.704363 zram_generator::config[1255]: No configuration found. May 15 16:27:37.911878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 16:27:38.044634 systemd[1]: Reloading finished in 479 ms. May 15 16:27:38.072015 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
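The sd-merge step above picks up extension images such as the kubernetes.raw link written during the files stage and overlays them onto /usr. Below is a minimal sketch that lists the images a merge like this would consider; the search directories are the commonly documented ones and are an assumption here, not taken from the log:

    #!/usr/bin/env python3
    # Minimal sketch: enumerate extension images that a systemd-sysext merge
    # would consider. Directory list is assumed (typical locations), not read
    # from this machine's configuration.
    import os

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extension_images():
        found = []
        for directory in SEARCH_DIRS:
            if not os.path.isdir(directory):
                continue
            for entry in sorted(os.listdir(directory)):
                path = os.path.join(directory, entry)
                # raw disk images and plain directory trees both count
                if entry.endswith(".raw") or os.path.isdir(path):
                    found.append(path)
        return found

    if __name__ == "__main__":
        for image in list_extension_images():
            print(image)  # e.g. /etc/extensions/kubernetes.raw from the files stage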
May 15 16:27:38.084425 systemd[1]: Starting ensure-sysext.service... May 15 16:27:38.086785 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 16:27:38.127452 systemd[1]: Reload requested from client PID 1310 ('systemctl') (unit ensure-sysext.service)... May 15 16:27:38.127629 systemd[1]: Reloading... May 15 16:27:38.148913 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 16:27:38.151421 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 15 16:27:38.155481 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 15 16:27:38.155984 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 16:27:38.156349 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 16:27:38.160264 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 16:27:38.160772 systemd-tmpfiles[1311]: ACLs are not supported, ignoring. May 15 16:27:38.160836 systemd-tmpfiles[1311]: ACLs are not supported, ignoring. May 15 16:27:38.172866 systemd-tmpfiles[1311]: Detected autofs mount point /boot during canonicalization of boot. May 15 16:27:38.172879 systemd-tmpfiles[1311]: Skipping /boot May 15 16:27:38.191615 systemd-tmpfiles[1311]: Detected autofs mount point /boot during canonicalization of boot. May 15 16:27:38.191628 systemd-tmpfiles[1311]: Skipping /boot May 15 16:27:38.209362 zram_generator::config[1335]: No configuration found. May 15 16:27:38.339332 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 16:27:38.442151 systemd[1]: Reloading finished in 313 ms. May 15 16:27:38.464332 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 16:27:38.465966 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 16:27:38.472777 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 16:27:38.485963 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 16:27:38.491672 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 16:27:38.495150 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 16:27:38.503642 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 16:27:38.508651 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 16:27:38.514024 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 16:27:38.525773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 16:27:38.525994 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 16:27:38.529864 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 16:27:38.536635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 15 16:27:38.553693 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 16:27:38.554454 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 16:27:38.554587 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 16:27:38.554721 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 16:27:38.563805 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 16:27:38.564080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 16:27:38.564817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 16:27:38.565005 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 16:27:38.570736 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 16:27:38.571338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 16:27:38.577595 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 16:27:38.581675 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 16:27:38.583243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 16:27:38.588529 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 16:27:38.590510 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 16:27:38.590646 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 16:27:38.590828 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 16:27:38.596555 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 16:27:38.598629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 16:27:38.598827 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 16:27:38.606657 systemd[1]: Finished ensure-sysext.service. May 15 16:27:38.611082 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 16:27:38.617584 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 16:27:38.623492 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
May 15 16:27:38.624718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 16:27:38.625408 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 16:27:38.626231 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 16:27:38.633429 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 16:27:38.637889 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 16:27:38.641260 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 16:27:38.645709 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 16:27:38.657671 systemd-udevd[1402]: Using default interface naming scheme 'v255'. May 15 16:27:38.673009 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 16:27:38.684587 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 16:27:38.707848 augenrules[1443]: No rules May 15 16:27:38.710303 systemd[1]: audit-rules.service: Deactivated successfully. May 15 16:27:38.712096 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 16:27:38.725330 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 16:27:38.731933 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 16:27:38.738642 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 16:27:38.740589 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 16:27:38.812636 systemd-resolved[1401]: Positive Trust Anchors: May 15 16:27:38.812653 systemd-resolved[1401]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 16:27:38.812698 systemd-resolved[1401]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 16:27:38.820788 systemd-resolved[1401]: Using system hostname 'ci-4334-0-0-a-855fb07f2a.novalocal'. May 15 16:27:38.823719 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 16:27:38.824701 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 16:27:38.864356 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 16:27:38.865630 systemd[1]: Reached target sysinit.target - System Initialization. May 15 16:27:38.866554 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 16:27:38.868394 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 16:27:38.869064 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 15 16:27:38.869683 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
May 15 16:27:38.870604 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 16:27:38.870635 systemd[1]: Reached target paths.target - Path Units. May 15 16:27:38.871572 systemd[1]: Reached target time-set.target - System Time Set. May 15 16:27:38.872697 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 16:27:38.873901 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 16:27:38.874906 systemd[1]: Reached target timers.target - Timer Units. May 15 16:27:38.879540 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 16:27:38.883582 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 16:27:38.890245 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 16:27:38.891587 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 16:27:38.892724 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 16:27:38.902969 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 16:27:38.904859 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 16:27:38.907515 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 16:27:38.909853 systemd[1]: Reached target sockets.target - Socket Units. May 15 16:27:38.911529 systemd[1]: Reached target basic.target - Basic System. May 15 16:27:38.912458 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 16:27:38.912491 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 16:27:38.914604 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 15 16:27:38.919599 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 16:27:38.927581 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 16:27:38.932364 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 15 16:27:38.933550 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 16:27:38.940633 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 16:27:38.941565 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 16:27:38.945139 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 15 16:27:38.952235 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 16:27:38.957483 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 16:27:38.967639 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 16:27:38.972447 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 15 16:27:38.974758 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Refreshing passwd entry cache May 15 16:27:38.974775 oslogin_cache_refresh[1492]: Refreshing passwd entry cache May 15 16:27:38.992348 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Failure getting users, quitting May 15 16:27:38.992348 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 16:27:38.992348 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Refreshing group entry cache May 15 16:27:38.992348 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Failure getting groups, quitting May 15 16:27:38.992348 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 16:27:38.984467 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 16:27:38.981525 oslogin_cache_refresh[1492]: Failure getting users, quitting May 15 16:27:38.986834 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 16:27:38.981557 oslogin_cache_refresh[1492]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 15 16:27:38.988566 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 16:27:38.981693 oslogin_cache_refresh[1492]: Refreshing group entry cache May 15 16:27:38.983447 oslogin_cache_refresh[1492]: Failure getting groups, quitting May 15 16:27:38.983457 oslogin_cache_refresh[1492]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 15 16:27:38.993436 systemd[1]: Starting update-engine.service - Update Engine... May 15 16:27:39.006611 jq[1489]: false May 15 16:27:39.007877 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 16:27:39.011393 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 16:27:39.012449 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 15 16:27:39.012665 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 15 16:27:39.023424 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 16:27:39.024385 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 16:27:39.072719 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 16:27:39.072997 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 16:27:39.087551 jq[1499]: true May 15 16:27:39.114477 systemd-networkd[1454]: lo: Link UP May 15 16:27:39.114487 systemd-networkd[1454]: lo: Gained carrier May 15 16:27:39.116844 update_engine[1498]: I20250515 16:27:39.116740 1498 main.cc:92] Flatcar Update Engine starting May 15 16:27:39.120282 dbus-daemon[1486]: [system] SELinux support is enabled May 15 16:27:39.120513 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 16:27:39.121806 tar[1505]: linux-amd64/helm May 15 16:27:39.126196 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
May 15 16:27:39.126236 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 16:27:39.128138 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 16:27:39.128165 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 16:27:39.129559 systemd[1]: Started update-engine.service - Update Engine. May 15 16:27:39.136216 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 16:27:39.141379 update_engine[1498]: I20250515 16:27:39.136029 1498 update_check_scheduler.cc:74] Next update check in 6m49s May 15 16:27:39.150284 jq[1518]: true May 15 16:27:39.155472 systemd-logind[1497]: New seat seat0. May 15 16:27:39.156449 systemd[1]: Started systemd-logind.service - User Login Management. May 15 16:27:39.164642 systemd[1]: motdgen.service: Deactivated successfully. May 15 16:27:39.164879 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 16:27:39.166441 systemd-networkd[1454]: Enumeration completed May 15 16:27:39.166624 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 16:27:39.167644 systemd[1]: Reached target network.target - Network. May 15 16:27:39.170084 systemd[1]: Starting containerd.service - containerd container runtime... May 15 16:27:39.172811 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 16:27:39.178026 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 16:27:39.229058 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 16:27:39.241825 (ntainerd)[1540]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 16:27:39.285036 bash[1551]: Updated "/home/core/.ssh/authorized_keys" May 15 16:27:39.285604 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 16:27:39.292676 systemd[1]: Starting sshkeys.service... May 15 16:27:39.327130 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 16:27:39.327142 systemd-networkd[1454]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 16:27:39.333379 systemd-networkd[1454]: eth0: Link UP May 15 16:27:39.335657 systemd-networkd[1454]: eth0: Gained carrier May 15 16:27:39.336376 systemd-networkd[1454]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 16:27:39.357803 systemd-networkd[1454]: eth0: DHCPv4 address 172.24.4.121/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 15 16:27:39.368600 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. May 15 16:27:39.378880 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 15 16:27:39.385890 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
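The DHCPv4 lease above hands eth0 the address 172.24.4.121/24 with gateway 172.24.4.1. A quick worked check, using only the values from that log line, that the gateway is on-link for the acquired prefix:

    #!/usr/bin/env python3
    # Worked check for the DHCPv4 lease logged above: confirm the gateway sits
    # inside the acquired prefix. Address, prefix, and gateway are copied from
    # the log line; nothing else is assumed.
    import ipaddress

    iface = ipaddress.ip_interface("172.24.4.121/24")
    gateway = ipaddress.ip_address("172.24.4.1")

    print(f"network: {iface.network}")                      # 172.24.4.0/24
    print(f"gateway on-link: {gateway in iface.network}")   # True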
May 15 16:27:39.450281 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 15 16:27:39.489972 extend-filesystems[1491]: Found loop4 May 15 16:27:39.499844 extend-filesystems[1491]: Found loop5 May 15 16:27:39.499844 extend-filesystems[1491]: Found loop6 May 15 16:27:39.499844 extend-filesystems[1491]: Found loop7 May 15 16:27:39.499844 extend-filesystems[1491]: Found vda May 15 16:27:39.499844 extend-filesystems[1491]: Found vda1 May 15 16:27:39.499844 extend-filesystems[1491]: Found vda2 May 15 16:27:39.499844 extend-filesystems[1491]: Found vda3 May 15 16:27:39.499844 extend-filesystems[1491]: Found usr May 15 16:27:39.499844 extend-filesystems[1491]: Found vda4 May 15 16:27:39.499844 extend-filesystems[1491]: Found vda6 May 15 16:27:39.499844 extend-filesystems[1491]: Found vda7 May 15 16:27:39.499844 extend-filesystems[1491]: Found vda9 May 15 16:27:39.492136 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 16:27:39.493482 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 16:27:39.580595 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 15 16:27:39.655254 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 16:27:39.663897 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 16:27:39.672291 containerd[1540]: time="2025-05-15T16:27:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 16:27:39.676805 containerd[1540]: time="2025-05-15T16:27:39.676763304Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 15 16:27:39.694645 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 16:27:39.779813 containerd[1540]: time="2025-05-15T16:27:39.776672935Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="55.484µs" May 15 16:27:39.782025 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
May 15 16:27:39.786639 containerd[1540]: time="2025-05-15T16:27:39.786499334Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 16:27:39.790248 containerd[1540]: time="2025-05-15T16:27:39.786813664Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 16:27:39.790248 containerd[1540]: time="2025-05-15T16:27:39.787304574Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 16:27:39.790248 containerd[1540]: time="2025-05-15T16:27:39.787351442Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 16:27:39.790248 containerd[1540]: time="2025-05-15T16:27:39.787434428Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 16:27:39.790248 containerd[1540]: time="2025-05-15T16:27:39.787595941Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 16:27:39.790248 containerd[1540]: time="2025-05-15T16:27:39.787616650Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 16:27:39.790248 containerd[1540]: time="2025-05-15T16:27:39.787933474Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 16:27:39.790248 containerd[1540]: time="2025-05-15T16:27:39.787981454Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 16:27:39.790248 containerd[1540]: time="2025-05-15T16:27:39.787996552Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 16:27:39.790248 containerd[1540]: time="2025-05-15T16:27:39.788007623Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 16:27:39.790248 containerd[1540]: time="2025-05-15T16:27:39.788094275Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 16:27:39.795431 containerd[1540]: time="2025-05-15T16:27:39.795398344Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 16:27:39.795906 containerd[1540]: time="2025-05-15T16:27:39.795883524Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 16:27:39.795973 containerd[1540]: time="2025-05-15T16:27:39.795957914Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 16:27:39.796420 containerd[1540]: time="2025-05-15T16:27:39.796368464Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 16:27:39.798341 containerd[1540]: time="2025-05-15T16:27:39.797105867Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 16:27:39.798341 containerd[1540]: 
time="2025-05-15T16:27:39.797192559Z" level=info msg="metadata content store policy set" policy=shared May 15 16:27:39.815232 containerd[1540]: time="2025-05-15T16:27:39.815192709Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 16:27:39.815556 containerd[1540]: time="2025-05-15T16:27:39.815537235Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 16:27:39.815689 containerd[1540]: time="2025-05-15T16:27:39.815670815Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 16:27:39.815796 containerd[1540]: time="2025-05-15T16:27:39.815778577Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816092566Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816110630Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816130227Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816144874Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816159582Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816182866Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816195760Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816210477Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816428166Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816453894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816474743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816488940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816530598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 16:27:39.817767 containerd[1540]: time="2025-05-15T16:27:39.816544594Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 16:27:39.818200 containerd[1540]: time="2025-05-15T16:27:39.816556797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 16:27:39.818200 containerd[1540]: 
time="2025-05-15T16:27:39.816567697Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 16:27:39.818200 containerd[1540]: time="2025-05-15T16:27:39.816582876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 16:27:39.818200 containerd[1540]: time="2025-05-15T16:27:39.816621358Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 16:27:39.818200 containerd[1540]: time="2025-05-15T16:27:39.816634282Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 16:27:39.818200 containerd[1540]: time="2025-05-15T16:27:39.816719262Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 16:27:39.818200 containerd[1540]: time="2025-05-15T16:27:39.816737195Z" level=info msg="Start snapshots syncer" May 15 16:27:39.818911 containerd[1540]: time="2025-05-15T16:27:39.818530188Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 16:27:39.820034 containerd[1540]: time="2025-05-15T16:27:39.818862431Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 16:27:39.822375 containerd[1540]: time="2025-05-15T16:27:39.820375529Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 16:27:39.829286 containerd[1540]: time="2025-05-15T16:27:39.829238321Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 16:27:39.829543 containerd[1540]: time="2025-05-15T16:27:39.829509540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers 
type=io.containerd.grpc.v1 May 15 16:27:39.829585 containerd[1540]: time="2025-05-15T16:27:39.829548934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 16:27:39.829585 containerd[1540]: time="2025-05-15T16:27:39.829564894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 16:27:39.829585 containerd[1540]: time="2025-05-15T16:27:39.829579030Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 16:27:39.829738 containerd[1540]: time="2025-05-15T16:27:39.829594339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 16:27:39.829738 containerd[1540]: time="2025-05-15T16:27:39.829607724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 16:27:39.829738 containerd[1540]: time="2025-05-15T16:27:39.829621690Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 15 16:27:39.829738 containerd[1540]: time="2025-05-15T16:27:39.829662597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 16:27:39.829738 containerd[1540]: time="2025-05-15T16:27:39.829676523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 16:27:39.829738 containerd[1540]: time="2025-05-15T16:27:39.829689307Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 16:27:39.835476 containerd[1540]: time="2025-05-15T16:27:39.835422650Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 16:27:39.835476 containerd[1540]: time="2025-05-15T16:27:39.835471201Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 16:27:39.835549 containerd[1540]: time="2025-05-15T16:27:39.835485528Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 16:27:39.835549 containerd[1540]: time="2025-05-15T16:27:39.835498332Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 16:27:39.835549 containerd[1540]: time="2025-05-15T16:27:39.835508330Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 16:27:39.835549 containerd[1540]: time="2025-05-15T16:27:39.835519001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 16:27:39.835549 containerd[1540]: time="2025-05-15T16:27:39.835532075Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 16:27:39.835730 containerd[1540]: time="2025-05-15T16:27:39.835562111Z" level=info msg="runtime interface created" May 15 16:27:39.835730 containerd[1540]: time="2025-05-15T16:27:39.835570838Z" level=info msg="created NRI interface" May 15 16:27:39.835730 containerd[1540]: time="2025-05-15T16:27:39.835584143Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 16:27:39.835730 containerd[1540]: time="2025-05-15T16:27:39.835606815Z" level=info msg="Connect containerd service" May 15 16:27:39.835730 
containerd[1540]: time="2025-05-15T16:27:39.835653803Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 16:27:39.839757 containerd[1540]: time="2025-05-15T16:27:39.838088099Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 16:27:39.860171 kernel: mousedev: PS/2 mouse device common for all mice May 15 16:27:39.915530 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 15 16:27:39.929722 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 15 16:27:39.929780 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 16:27:40.084897 sshd_keygen[1521]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 16:27:40.094069 containerd[1540]: time="2025-05-15T16:27:40.094029176Z" level=info msg="Start subscribing containerd event" May 15 16:27:40.094226 containerd[1540]: time="2025-05-15T16:27:40.094196951Z" level=info msg="Start recovering state" May 15 16:27:40.094423 containerd[1540]: time="2025-05-15T16:27:40.094405702Z" level=info msg="Start event monitor" May 15 16:27:40.094549 containerd[1540]: time="2025-05-15T16:27:40.094524405Z" level=info msg="Start cni network conf syncer for default" May 15 16:27:40.094609 containerd[1540]: time="2025-05-15T16:27:40.094596671Z" level=info msg="Start streaming server" May 15 16:27:40.094680 containerd[1540]: time="2025-05-15T16:27:40.094666371Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 16:27:40.094746 containerd[1540]: time="2025-05-15T16:27:40.094727336Z" level=info msg="runtime interface starting up..." May 15 16:27:40.094802 containerd[1540]: time="2025-05-15T16:27:40.094790815Z" level=info msg="starting plugins..." May 15 16:27:40.094865 containerd[1540]: time="2025-05-15T16:27:40.094852631Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 16:27:40.095428 containerd[1540]: time="2025-05-15T16:27:40.095210993Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 16:27:40.095598 containerd[1540]: time="2025-05-15T16:27:40.095557352Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 16:27:40.095964 systemd[1]: Started containerd.service - containerd container runtime. May 15 16:27:40.122192 containerd[1540]: time="2025-05-15T16:27:40.097850824Z" level=info msg="containerd successfully booted in 0.426957s" May 15 16:27:40.161363 kernel: ACPI: button: Power Button [PWRF] May 15 16:27:40.182695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 16:27:40.190415 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 16:27:40.197623 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 16:27:40.231449 systemd[1]: issuegen.service: Deactivated successfully. May 15 16:27:40.231730 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 16:27:40.237400 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 16:27:40.282388 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 16:27:40.287906 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
May 15 16:27:40.291221 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 16:27:40.292440 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 16:27:40.295347 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 May 15 16:27:40.298453 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 15 16:27:40.299259 systemd[1]: Reached target getty.target - Login Prompts. May 15 16:27:40.300287 systemd-logind[1497]: Watching system buttons on /dev/input/event2 (Power Button) May 15 16:27:40.308602 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console May 15 16:27:40.310790 systemd[1]: Started sshd@0-172.24.4.121:22-172.24.4.1:34206.service - OpenSSH per-connection server daemon (172.24.4.1:34206). May 15 16:27:40.330606 kernel: Console: switching to colour dummy device 80x25 May 15 16:27:40.337835 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 15 16:27:40.337919 kernel: [drm] features: -context_init May 15 16:27:40.339738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 16:27:40.346543 kernel: [drm] number of scanouts: 1 May 15 16:27:40.346629 kernel: [drm] number of cap sets: 0 May 15 16:27:40.341014 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 16:27:40.344927 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 16:27:40.353749 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 16:27:40.357466 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 May 15 16:27:40.417296 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 16:27:40.550115 systemd-networkd[1454]: eth0: Gained IPv6LL May 15 16:27:40.560641 tar[1505]: linux-amd64/LICENSE May 15 16:27:40.560641 tar[1505]: linux-amd64/README.md May 15 16:27:40.560036 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. May 15 16:27:40.564562 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 16:27:40.568055 systemd[1]: Reached target network-online.target - Network is Online. May 15 16:27:40.573642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 16:27:40.576584 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 16:27:40.590593 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 16:27:40.623951 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 16:27:41.389654 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 15 16:27:41.397861 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 15 16:27:41.403275 sshd[1619]: Accepted publickey for core from 172.24.4.1 port 34206 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:27:41.412550 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:27:41.463894 systemd-logind[1497]: New session 1 of user core. May 15 16:27:41.470040 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 16:27:41.473840 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 16:27:41.502842 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 16:27:41.508586 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 15 16:27:41.527175 (systemd)[1650]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 16:27:41.532075 systemd-logind[1497]: New session c1 of user core. May 15 16:27:41.723634 systemd[1650]: Queued start job for default target default.target. May 15 16:27:41.735676 systemd[1650]: Created slice app.slice - User Application Slice. May 15 16:27:41.735709 systemd[1650]: Reached target paths.target - Paths. May 15 16:27:41.735754 systemd[1650]: Reached target timers.target - Timers. May 15 16:27:41.741491 systemd[1650]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 16:27:41.754824 systemd[1650]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 16:27:41.754977 systemd[1650]: Reached target sockets.target - Sockets. May 15 16:27:41.755024 systemd[1650]: Reached target basic.target - Basic System. May 15 16:27:41.755064 systemd[1650]: Reached target default.target - Main User Target. May 15 16:27:41.755145 systemd[1650]: Startup finished in 210ms. May 15 16:27:41.755425 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 16:27:41.766607 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 16:27:42.285027 systemd[1]: Started sshd@1-172.24.4.121:22-172.24.4.1:34212.service - OpenSSH per-connection server daemon (172.24.4.1:34212). May 15 16:27:42.548091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 16:27:42.568297 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 16:27:43.430612 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 15 16:27:43.432641 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 15 16:27:43.905470 kubelet[1667]: E0515 16:27:43.904988 1667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 16:27:43.912668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 16:27:43.913526 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 16:27:43.916529 systemd[1]: kubelet.service: Consumed 1.932s CPU time, 236.9M memory peak. May 15 16:27:44.385877 sshd[1661]: Accepted publickey for core from 172.24.4.1 port 34212 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:27:44.391859 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:27:44.408627 systemd-logind[1497]: New session 2 of user core. May 15 16:27:44.426095 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 16:27:45.026462 sshd[1678]: Connection closed by 172.24.4.1 port 34212 May 15 16:27:45.027169 sshd-session[1661]: pam_unix(sshd:session): session closed for user core May 15 16:27:45.047994 systemd[1]: sshd@1-172.24.4.121:22-172.24.4.1:34212.service: Deactivated successfully. May 15 16:27:45.053005 systemd[1]: session-2.scope: Deactivated successfully. May 15 16:27:45.055866 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit. May 15 16:27:45.060704 systemd-logind[1497]: Removed session 2. 
May 15 16:27:45.065010 systemd[1]: Started sshd@2-172.24.4.121:22-172.24.4.1:59448.service - OpenSSH per-connection server daemon (172.24.4.1:59448). May 15 16:27:45.375445 login[1617]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 15 16:27:45.386539 login[1618]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 15 16:27:45.387360 systemd-logind[1497]: New session 3 of user core. May 15 16:27:45.400869 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 16:27:45.411543 systemd-logind[1497]: New session 4 of user core. May 15 16:27:45.418600 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 16:27:46.318675 sshd[1684]: Accepted publickey for core from 172.24.4.1 port 59448 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:27:46.320754 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:27:46.333415 systemd-logind[1497]: New session 5 of user core. May 15 16:27:46.342859 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 16:27:46.958457 sshd[1712]: Connection closed by 172.24.4.1 port 59448 May 15 16:27:46.959763 sshd-session[1684]: pam_unix(sshd:session): session closed for user core May 15 16:27:46.967632 systemd[1]: sshd@2-172.24.4.121:22-172.24.4.1:59448.service: Deactivated successfully. May 15 16:27:46.971991 systemd[1]: session-5.scope: Deactivated successfully. May 15 16:27:46.975500 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit. May 15 16:27:46.980219 systemd-logind[1497]: Removed session 5. May 15 16:27:47.521383 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 15 16:27:47.528389 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 15 16:27:47.544368 coreos-metadata[1485]: May 15 16:27:47.543 WARN failed to locate config-drive, using the metadata service API instead May 15 16:27:47.545687 coreos-metadata[1554]: May 15 16:27:47.545 WARN failed to locate config-drive, using the metadata service API instead May 15 16:27:47.604516 coreos-metadata[1554]: May 15 16:27:47.602 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 15 16:27:47.607272 coreos-metadata[1485]: May 15 16:27:47.603 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 15 16:27:47.803125 coreos-metadata[1554]: May 15 16:27:47.802 INFO Fetch successful May 15 16:27:47.803125 coreos-metadata[1554]: May 15 16:27:47.802 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 15 16:27:47.820895 coreos-metadata[1554]: May 15 16:27:47.820 INFO Fetch successful May 15 16:27:47.828634 unknown[1554]: wrote ssh authorized keys file for user: core May 15 16:27:47.892011 update-ssh-keys[1722]: Updated "/home/core/.ssh/authorized_keys" May 15 16:27:47.894910 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 15 16:27:47.901756 systemd[1]: Finished sshkeys.service. 
May 15 16:27:48.032651 coreos-metadata[1485]: May 15 16:27:48.032 INFO Fetch successful May 15 16:27:48.033209 coreos-metadata[1485]: May 15 16:27:48.033 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 15 16:27:48.047947 coreos-metadata[1485]: May 15 16:27:48.047 INFO Fetch successful May 15 16:27:48.048367 coreos-metadata[1485]: May 15 16:27:48.048 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 15 16:27:48.064049 coreos-metadata[1485]: May 15 16:27:48.063 INFO Fetch successful May 15 16:27:48.064360 coreos-metadata[1485]: May 15 16:27:48.064 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 15 16:27:48.078561 coreos-metadata[1485]: May 15 16:27:48.078 INFO Fetch successful May 15 16:27:48.078912 coreos-metadata[1485]: May 15 16:27:48.078 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 15 16:27:48.092418 coreos-metadata[1485]: May 15 16:27:48.092 INFO Fetch successful May 15 16:27:48.092766 coreos-metadata[1485]: May 15 16:27:48.092 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 15 16:27:48.111184 coreos-metadata[1485]: May 15 16:27:48.111 INFO Fetch successful May 15 16:27:48.173915 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 15 16:27:48.175863 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 16:27:48.176423 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 16:27:48.177229 systemd[1]: Startup finished in 3.705s (kernel) + 15.898s (initrd) + 12.647s (userspace) = 32.251s. May 15 16:27:54.163939 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 16:27:54.168818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 16:27:54.598557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 16:27:54.613427 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 16:27:54.749823 kubelet[1738]: E0515 16:27:54.749715 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 16:27:54.759909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 16:27:54.760500 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 16:27:54.762195 systemd[1]: kubelet.service: Consumed 409ms CPU time, 93.8M memory peak. May 15 16:27:56.986494 systemd[1]: Started sshd@3-172.24.4.121:22-172.24.4.1:49442.service - OpenSSH per-connection server daemon (172.24.4.1:49442). May 15 16:27:58.284055 sshd[1746]: Accepted publickey for core from 172.24.4.1 port 49442 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:27:58.287055 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:27:58.300007 systemd-logind[1497]: New session 6 of user core. May 15 16:27:58.320960 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 15 16:27:59.044348 sshd[1748]: Connection closed by 172.24.4.1 port 49442 May 15 16:27:59.044330 sshd-session[1746]: pam_unix(sshd:session): session closed for user core May 15 16:27:59.056808 systemd[1]: sshd@3-172.24.4.121:22-172.24.4.1:49442.service: Deactivated successfully. May 15 16:27:59.058902 systemd[1]: session-6.scope: Deactivated successfully. May 15 16:27:59.060145 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit. May 15 16:27:59.063551 systemd[1]: Started sshd@4-172.24.4.121:22-172.24.4.1:49458.service - OpenSSH per-connection server daemon (172.24.4.1:49458). May 15 16:27:59.065426 systemd-logind[1497]: Removed session 6. May 15 16:28:00.419700 sshd[1754]: Accepted publickey for core from 172.24.4.1 port 49458 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:28:00.422366 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:28:00.443514 systemd-logind[1497]: New session 7 of user core. May 15 16:28:00.459895 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 16:28:01.062656 sshd[1756]: Connection closed by 172.24.4.1 port 49458 May 15 16:28:01.064021 sshd-session[1754]: pam_unix(sshd:session): session closed for user core May 15 16:28:01.075805 systemd[1]: sshd@4-172.24.4.121:22-172.24.4.1:49458.service: Deactivated successfully. May 15 16:28:01.079548 systemd[1]: session-7.scope: Deactivated successfully. May 15 16:28:01.083395 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. May 15 16:28:01.088650 systemd[1]: Started sshd@5-172.24.4.121:22-172.24.4.1:49468.service - OpenSSH per-connection server daemon (172.24.4.1:49468). May 15 16:28:01.091668 systemd-logind[1497]: Removed session 7. May 15 16:28:02.742537 sshd[1762]: Accepted publickey for core from 172.24.4.1 port 49468 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:28:02.745167 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:28:02.758439 systemd-logind[1497]: New session 8 of user core. May 15 16:28:02.764639 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 16:28:03.640385 sshd[1764]: Connection closed by 172.24.4.1 port 49468 May 15 16:28:03.640738 sshd-session[1762]: pam_unix(sshd:session): session closed for user core May 15 16:28:03.656617 systemd[1]: sshd@5-172.24.4.121:22-172.24.4.1:49468.service: Deactivated successfully. May 15 16:28:03.660421 systemd[1]: session-8.scope: Deactivated successfully. May 15 16:28:03.662852 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit. May 15 16:28:03.669442 systemd[1]: Started sshd@6-172.24.4.121:22-172.24.4.1:41210.service - OpenSSH per-connection server daemon (172.24.4.1:41210). May 15 16:28:03.671491 systemd-logind[1497]: Removed session 8. May 15 16:28:04.965380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 16:28:04.972405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 16:28:05.249927 sshd[1770]: Accepted publickey for core from 172.24.4.1 port 41210 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:28:05.253672 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:28:05.267301 systemd-logind[1497]: New session 9 of user core. May 15 16:28:05.277649 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 15 16:28:05.350619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 16:28:05.366131 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 16:28:05.481145 kubelet[1781]: E0515 16:28:05.481067 1781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 16:28:05.485119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 16:28:05.485655 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 16:28:05.486858 systemd[1]: kubelet.service: Consumed 352ms CPU time, 93M memory peak. May 15 16:28:05.693424 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 16:28:05.694044 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 16:28:05.717746 sudo[1788]: pam_unix(sudo:session): session closed for user root May 15 16:28:05.930364 sshd[1775]: Connection closed by 172.24.4.1 port 41210 May 15 16:28:05.930762 sshd-session[1770]: pam_unix(sshd:session): session closed for user core May 15 16:28:05.947120 systemd[1]: sshd@6-172.24.4.121:22-172.24.4.1:41210.service: Deactivated successfully. May 15 16:28:05.950834 systemd[1]: session-9.scope: Deactivated successfully. May 15 16:28:05.953026 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit. May 15 16:28:05.959455 systemd[1]: Started sshd@7-172.24.4.121:22-172.24.4.1:41216.service - OpenSSH per-connection server daemon (172.24.4.1:41216). May 15 16:28:05.962222 systemd-logind[1497]: Removed session 9. May 15 16:28:07.259570 sshd[1794]: Accepted publickey for core from 172.24.4.1 port 41216 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:28:07.262543 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:28:07.274421 systemd-logind[1497]: New session 10 of user core. May 15 16:28:07.281640 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 16:28:07.824701 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 16:28:07.826477 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 16:28:07.842591 sudo[1798]: pam_unix(sudo:session): session closed for user root May 15 16:28:07.855969 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 16:28:07.856697 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 16:28:07.883196 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 16:28:08.014946 augenrules[1820]: No rules May 15 16:28:08.018198 systemd[1]: audit-rules.service: Deactivated successfully. May 15 16:28:08.019507 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
May 15 16:28:08.022944 sudo[1797]: pam_unix(sudo:session): session closed for user root May 15 16:28:08.275175 sshd[1796]: Connection closed by 172.24.4.1 port 41216 May 15 16:28:08.279101 sshd-session[1794]: pam_unix(sshd:session): session closed for user core May 15 16:28:08.299247 systemd[1]: sshd@7-172.24.4.121:22-172.24.4.1:41216.service: Deactivated successfully. May 15 16:28:08.304127 systemd[1]: session-10.scope: Deactivated successfully. May 15 16:28:08.307294 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit. May 15 16:28:08.315929 systemd[1]: Started sshd@8-172.24.4.121:22-172.24.4.1:41232.service - OpenSSH per-connection server daemon (172.24.4.1:41232). May 15 16:28:08.319281 systemd-logind[1497]: Removed session 10. May 15 16:28:09.706551 sshd[1829]: Accepted publickey for core from 172.24.4.1 port 41232 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:28:09.709973 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:28:09.725440 systemd-logind[1497]: New session 11 of user core. May 15 16:28:09.733664 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 16:28:10.265859 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 16:28:10.267472 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 16:28:11.838095 systemd-timesyncd[1422]: Contacted time server 173.71.68.71:123 (2.flatcar.pool.ntp.org). May 15 16:28:11.838276 systemd-timesyncd[1422]: Initial clock synchronization to Thu 2025-05-15 16:28:11.837150 UTC. May 15 16:28:11.841420 systemd-resolved[1401]: Clock change detected. Flushing caches. May 15 16:28:12.323772 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 16:28:12.353324 (dockerd)[1849]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 16:28:12.969270 dockerd[1849]: time="2025-05-15T16:28:12.969108682Z" level=info msg="Starting up" May 15 16:28:12.972091 dockerd[1849]: time="2025-05-15T16:28:12.970906193Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 16:28:13.021878 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3035780964-merged.mount: Deactivated successfully. May 15 16:28:13.069652 dockerd[1849]: time="2025-05-15T16:28:13.069550120Z" level=info msg="Loading containers: start." May 15 16:28:13.089378 kernel: Initializing XFRM netlink socket May 15 16:28:13.541236 systemd-networkd[1454]: docker0: Link UP May 15 16:28:13.549795 dockerd[1849]: time="2025-05-15T16:28:13.549704699Z" level=info msg="Loading containers: done." 
May 15 16:28:13.581114 dockerd[1849]: time="2025-05-15T16:28:13.581048926Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 16:28:13.581609 dockerd[1849]: time="2025-05-15T16:28:13.581478291Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 15 16:28:13.582089 dockerd[1849]: time="2025-05-15T16:28:13.582008375Z" level=info msg="Initializing buildkit" May 15 16:28:13.635058 dockerd[1849]: time="2025-05-15T16:28:13.634948254Z" level=info msg="Completed buildkit initialization" May 15 16:28:13.655643 dockerd[1849]: time="2025-05-15T16:28:13.655478228Z" level=info msg="Daemon has completed initialization" May 15 16:28:13.655952 dockerd[1849]: time="2025-05-15T16:28:13.655667954Z" level=info msg="API listen on /run/docker.sock" May 15 16:28:13.656205 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 16:28:15.318223 containerd[1540]: time="2025-05-15T16:28:15.318072743Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 15 16:28:16.171185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2020866963.mount: Deactivated successfully. May 15 16:28:16.566618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 15 16:28:16.572142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 16:28:16.977895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 16:28:16.990191 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 16:28:17.077889 kubelet[2086]: E0515 16:28:17.076966 2086 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 16:28:17.079565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 16:28:17.079724 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 16:28:17.080393 systemd[1]: kubelet.service: Consumed 203ms CPU time, 96M memory peak. 
May 15 16:28:18.095159 containerd[1540]: time="2025-05-15T16:28:18.094696530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:18.098348 containerd[1540]: time="2025-05-15T16:28:18.098159175Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960995" May 15 16:28:18.099954 containerd[1540]: time="2025-05-15T16:28:18.099897685Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:18.104966 containerd[1540]: time="2025-05-15T16:28:18.104929583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:18.107641 containerd[1540]: time="2025-05-15T16:28:18.107242791Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.787573092s" May 15 16:28:18.107641 containerd[1540]: time="2025-05-15T16:28:18.107447765Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 15 16:28:18.117176 containerd[1540]: time="2025-05-15T16:28:18.117018295Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 15 16:28:20.178078 containerd[1540]: time="2025-05-15T16:28:20.176701049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:20.178078 containerd[1540]: time="2025-05-15T16:28:20.177975540Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713784" May 15 16:28:20.179641 containerd[1540]: time="2025-05-15T16:28:20.179594967Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:20.183344 containerd[1540]: time="2025-05-15T16:28:20.183305446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:20.186797 containerd[1540]: time="2025-05-15T16:28:20.186762249Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.06937361s" May 15 16:28:20.186930 containerd[1540]: time="2025-05-15T16:28:20.186910147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 15 16:28:20.189404 
containerd[1540]: time="2025-05-15T16:28:20.189222002Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 15 16:28:22.136553 containerd[1540]: time="2025-05-15T16:28:22.135988813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:22.141974 containerd[1540]: time="2025-05-15T16:28:22.138687875Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780394" May 15 16:28:22.141974 containerd[1540]: time="2025-05-15T16:28:22.139957056Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:22.144445 containerd[1540]: time="2025-05-15T16:28:22.144402273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:22.145940 containerd[1540]: time="2025-05-15T16:28:22.145043796Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.955790725s" May 15 16:28:22.146021 containerd[1540]: time="2025-05-15T16:28:22.145961146Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 15 16:28:22.148545 containerd[1540]: time="2025-05-15T16:28:22.147302813Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 15 16:28:23.541771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount130415161.mount: Deactivated successfully. 
May 15 16:28:24.133046 containerd[1540]: time="2025-05-15T16:28:24.132969680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:24.134485 containerd[1540]: time="2025-05-15T16:28:24.134429147Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633" May 15 16:28:24.136222 containerd[1540]: time="2025-05-15T16:28:24.136151127Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:24.139346 containerd[1540]: time="2025-05-15T16:28:24.139291076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:24.140186 containerd[1540]: time="2025-05-15T16:28:24.140094302Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 1.99196508s" May 15 16:28:24.140186 containerd[1540]: time="2025-05-15T16:28:24.140133376Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 15 16:28:24.141641 containerd[1540]: time="2025-05-15T16:28:24.141610266Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 16:28:24.798382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount497632476.mount: Deactivated successfully. May 15 16:28:24.821946 update_engine[1498]: I20250515 16:28:24.821380 1498 update_attempter.cc:509] Updating boot flags... 
May 15 16:28:26.232104 containerd[1540]: time="2025-05-15T16:28:26.232041589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:26.233791 containerd[1540]: time="2025-05-15T16:28:26.233540009Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" May 15 16:28:26.234932 containerd[1540]: time="2025-05-15T16:28:26.234895632Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:26.238177 containerd[1540]: time="2025-05-15T16:28:26.238139125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:26.239218 containerd[1540]: time="2025-05-15T16:28:26.239191689Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.097545436s" May 15 16:28:26.239307 containerd[1540]: time="2025-05-15T16:28:26.239290845Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 15 16:28:26.240646 containerd[1540]: time="2025-05-15T16:28:26.240459537Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 16:28:26.846014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount264099745.mount: Deactivated successfully. 
May 15 16:28:26.857821 containerd[1540]: time="2025-05-15T16:28:26.856824319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 16:28:26.858940 containerd[1540]: time="2025-05-15T16:28:26.858792440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 15 16:28:26.862025 containerd[1540]: time="2025-05-15T16:28:26.861965361Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 16:28:26.868736 containerd[1540]: time="2025-05-15T16:28:26.868618549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 16:28:26.870952 containerd[1540]: time="2025-05-15T16:28:26.870852569Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 630.353548ms" May 15 16:28:26.871226 containerd[1540]: time="2025-05-15T16:28:26.871181836Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 16:28:26.872699 containerd[1540]: time="2025-05-15T16:28:26.872526729Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 15 16:28:27.317723 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 15 16:28:27.324115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 16:28:27.733108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 16:28:27.749565 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 16:28:27.842482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913477074.mount: Deactivated successfully. May 15 16:28:27.876896 kubelet[2212]: E0515 16:28:27.876818 2212 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 16:28:27.879712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 16:28:27.880181 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 16:28:27.881713 systemd[1]: kubelet.service: Consumed 344ms CPU time, 95.8M memory peak. 
May 15 16:28:30.875927 containerd[1540]: time="2025-05-15T16:28:30.875331137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:30.879220 containerd[1540]: time="2025-05-15T16:28:30.878709904Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" May 15 16:28:30.880457 containerd[1540]: time="2025-05-15T16:28:30.880397339Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:30.888248 containerd[1540]: time="2025-05-15T16:28:30.888154768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:30.889935 containerd[1540]: time="2025-05-15T16:28:30.889333990Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.016369159s" May 15 16:28:30.889935 containerd[1540]: time="2025-05-15T16:28:30.889399563Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 15 16:28:34.391099 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 16:28:34.392160 systemd[1]: kubelet.service: Consumed 344ms CPU time, 95.8M memory peak. May 15 16:28:34.402744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 16:28:34.424887 systemd[1]: Reload requested from client PID 2297 ('systemctl') (unit session-11.scope)... May 15 16:28:34.424952 systemd[1]: Reloading... May 15 16:28:34.533959 zram_generator::config[2341]: No configuration found. May 15 16:28:34.691140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 16:28:34.834241 systemd[1]: Reloading finished in 408 ms. May 15 16:28:34.912139 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 16:28:34.912319 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 16:28:34.912999 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 16:28:34.917191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 16:28:35.289759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 16:28:35.310523 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 16:28:35.386747 kubelet[2407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 16:28:35.386747 kubelet[2407]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 15 16:28:35.386747 kubelet[2407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 16:28:35.387608 kubelet[2407]: I0515 16:28:35.386895 2407 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 16:28:35.738790 kubelet[2407]: I0515 16:28:35.738723 2407 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 16:28:35.738790 kubelet[2407]: I0515 16:28:35.738752 2407 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 16:28:35.739392 kubelet[2407]: I0515 16:28:35.739318 2407 server.go:929] "Client rotation is on, will bootstrap in background" May 15 16:28:36.266711 kubelet[2407]: I0515 16:28:36.265963 2407 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 16:28:36.266711 kubelet[2407]: E0515 16:28:36.266039 2407 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.121:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.121:6443: connect: connection refused" logger="UnhandledError" May 15 16:28:36.291280 kubelet[2407]: I0515 16:28:36.291179 2407 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 16:28:36.306127 kubelet[2407]: I0515 16:28:36.306074 2407 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 16:28:36.310934 kubelet[2407]: I0515 16:28:36.309741 2407 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 16:28:36.310934 kubelet[2407]: I0515 16:28:36.310301 2407 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 16:28:36.310934 kubelet[2407]: I0515 16:28:36.310374 2407 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334-0-0-a-855fb07f2a.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 16:28:36.312571 kubelet[2407]: I0515 16:28:36.312528 2407 topology_manager.go:138] "Creating topology manager with none policy" May 15 16:28:36.313548 kubelet[2407]: I0515 16:28:36.312716 2407 container_manager_linux.go:300] "Creating device plugin manager" May 15 16:28:36.313548 kubelet[2407]: I0515 16:28:36.313126 2407 state_mem.go:36] "Initialized new in-memory state store" May 15 16:28:36.319058 kubelet[2407]: I0515 16:28:36.319016 2407 kubelet.go:408] "Attempting to sync node with API server" May 15 16:28:36.319374 kubelet[2407]: I0515 16:28:36.319314 2407 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 16:28:36.319806 kubelet[2407]: I0515 16:28:36.319772 2407 kubelet.go:314] "Adding apiserver pod source" May 15 16:28:36.320166 kubelet[2407]: I0515 16:28:36.320129 2407 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 16:28:36.333158 kubelet[2407]: W0515 16:28:36.332513 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334-0-0-a-855fb07f2a.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.121:6443: connect: connection refused May 15 16:28:36.333158 kubelet[2407]: E0515 16:28:36.332742 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.24.4.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334-0-0-a-855fb07f2a.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.121:6443: connect: connection refused" logger="UnhandledError" May 15 16:28:36.336915 kubelet[2407]: W0515 16:28:36.336472 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.121:6443: connect: connection refused May 15 16:28:36.336915 kubelet[2407]: E0515 16:28:36.336586 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.121:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.121:6443: connect: connection refused" logger="UnhandledError" May 15 16:28:36.337223 kubelet[2407]: I0515 16:28:36.337058 2407 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 16:28:36.342263 kubelet[2407]: I0515 16:28:36.342202 2407 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 16:28:36.344601 kubelet[2407]: W0515 16:28:36.344004 2407 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 16:28:36.349074 kubelet[2407]: I0515 16:28:36.349030 2407 server.go:1269] "Started kubelet" May 15 16:28:36.357504 kubelet[2407]: I0515 16:28:36.357459 2407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 16:28:36.362393 kubelet[2407]: I0515 16:28:36.362351 2407 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 16:28:36.363237 kubelet[2407]: I0515 16:28:36.363150 2407 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 16:28:36.363853 kubelet[2407]: I0515 16:28:36.363815 2407 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 16:28:36.364608 kubelet[2407]: I0515 16:28:36.364564 2407 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 16:28:36.365772 kubelet[2407]: I0515 16:28:36.365232 2407 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 16:28:36.365772 kubelet[2407]: E0515 16:28:36.365585 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4334-0-0-a-855fb07f2a.novalocal\" not found" May 15 16:28:36.367549 kubelet[2407]: I0515 16:28:36.367529 2407 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 16:28:36.367791 kubelet[2407]: I0515 16:28:36.367777 2407 reconciler.go:26] "Reconciler: start to sync state" May 15 16:28:36.371017 kubelet[2407]: W0515 16:28:36.370689 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.121:6443: connect: connection refused May 15 16:28:36.371017 kubelet[2407]: E0515 16:28:36.370744 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.24.4.121:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.121:6443: connect: connection refused" logger="UnhandledError" May 15 16:28:36.371017 kubelet[2407]: E0515 16:28:36.370811 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334-0-0-a-855fb07f2a.novalocal?timeout=10s\": dial tcp 172.24.4.121:6443: connect: connection refused" interval="200ms" May 15 16:28:36.375145 kubelet[2407]: E0515 16:28:36.370898 2407 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.121:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.121:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4334-0-0-a-855fb07f2a.novalocal.183fc035799961d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4334-0-0-a-855fb07f2a.novalocal,UID:ci-4334-0-0-a-855fb07f2a.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4334-0-0-a-855fb07f2a.novalocal,},FirstTimestamp:2025-05-15 16:28:36.348936663 +0000 UTC m=+1.026345411,LastTimestamp:2025-05-15 16:28:36.348936663 +0000 UTC m=+1.026345411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4334-0-0-a-855fb07f2a.novalocal,}" May 15 16:28:36.375662 kubelet[2407]: I0515 16:28:36.375641 2407 factory.go:221] Registration of the systemd container factory successfully May 15 16:28:36.375833 kubelet[2407]: I0515 16:28:36.375813 2407 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 16:28:36.377098 kubelet[2407]: I0515 16:28:36.377026 2407 server.go:460] "Adding debug handlers to kubelet server" May 15 16:28:36.381100 kubelet[2407]: I0515 16:28:36.381056 2407 factory.go:221] Registration of the containerd container factory successfully May 15 16:28:36.397310 kubelet[2407]: I0515 16:28:36.396251 2407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 16:28:36.397616 kubelet[2407]: I0515 16:28:36.397408 2407 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 16:28:36.397616 kubelet[2407]: I0515 16:28:36.397512 2407 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 16:28:36.397616 kubelet[2407]: I0515 16:28:36.397545 2407 kubelet.go:2321] "Starting kubelet main sync loop" May 15 16:28:36.397616 kubelet[2407]: E0515 16:28:36.397602 2407 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 16:28:36.403845 kubelet[2407]: W0515 16:28:36.403797 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.121:6443: connect: connection refused May 15 16:28:36.403845 kubelet[2407]: E0515 16:28:36.403847 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.121:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.121:6443: connect: connection refused" logger="UnhandledError" May 15 16:28:36.404535 kubelet[2407]: E0515 16:28:36.404460 2407 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 16:28:36.414995 kubelet[2407]: I0515 16:28:36.414964 2407 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 16:28:36.414995 kubelet[2407]: I0515 16:28:36.414985 2407 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 16:28:36.415131 kubelet[2407]: I0515 16:28:36.415023 2407 state_mem.go:36] "Initialized new in-memory state store" May 15 16:28:36.420149 kubelet[2407]: I0515 16:28:36.420123 2407 policy_none.go:49] "None policy: Start" May 15 16:28:36.421320 kubelet[2407]: I0515 16:28:36.421012 2407 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 16:28:36.421320 kubelet[2407]: I0515 16:28:36.421032 2407 state_mem.go:35] "Initializing new in-memory state store" May 15 16:28:36.433238 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 16:28:36.447035 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 16:28:36.452677 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 15 16:28:36.465750 kubelet[2407]: E0515 16:28:36.465729 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4334-0-0-a-855fb07f2a.novalocal\" not found" May 15 16:28:36.466313 kubelet[2407]: I0515 16:28:36.466275 2407 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 16:28:36.466665 kubelet[2407]: I0515 16:28:36.466639 2407 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 16:28:36.466895 kubelet[2407]: I0515 16:28:36.466725 2407 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 16:28:36.467688 kubelet[2407]: I0515 16:28:36.467637 2407 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 16:28:36.471416 kubelet[2407]: E0515 16:28:36.471310 2407 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4334-0-0-a-855fb07f2a.novalocal\" not found" May 15 16:28:36.534473 systemd[1]: Created slice kubepods-burstable-podf805d3aaa5774fbf25211376c16f0d6e.slice - libcontainer container kubepods-burstable-podf805d3aaa5774fbf25211376c16f0d6e.slice. May 15 16:28:36.564743 systemd[1]: Created slice kubepods-burstable-pod0f2f0e626d037b0d89b457711ff4fdf5.slice - libcontainer container kubepods-burstable-pod0f2f0e626d037b0d89b457711ff4fdf5.slice. May 15 16:28:36.572966 kubelet[2407]: E0515 16:28:36.572158 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334-0-0-a-855fb07f2a.novalocal?timeout=10s\": dial tcp 172.24.4.121:6443: connect: connection refused" interval="400ms" May 15 16:28:36.574502 kubelet[2407]: I0515 16:28:36.573169 2407 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:36.575918 kubelet[2407]: E0515 16:28:36.575771 2407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.121:6443/api/v1/nodes\": dial tcp 172.24.4.121:6443: connect: connection refused" node="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:36.580542 systemd[1]: Created slice kubepods-burstable-pod76c87c9e9d678e9b3a89e7419fcd7c03.slice - libcontainer container kubepods-burstable-pod76c87c9e9d678e9b3a89e7419fcd7c03.slice. 
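Every reflector, lease, event, and node-registration error in this stretch is the same symptom: nothing is answering on 172.24.4.121:6443 yet, because the kube-apiserver static pod is only now being created from /etc/kubernetes/manifests. A small Go sketch that probes the endpoint taken from the log (illustrative only; this is not how the kubelet performs its checks):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the API server endpoint that the errors above keep retrying.
func main() {
	conn, err := net.DialTimeout("tcp", "172.24.4.121:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err) // e.g. "connect: connection refused"
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```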
May 15 16:28:36.670484 kubelet[2407]: I0515 16:28:36.670370 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f805d3aaa5774fbf25211376c16f0d6e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"f805d3aaa5774fbf25211376c16f0d6e\") " pod="kube-system/kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:36.670484 kubelet[2407]: I0515 16:28:36.670464 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f2f0e626d037b0d89b457711ff4fdf5-ca-certs\") pod \"kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"0f2f0e626d037b0d89b457711ff4fdf5\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:36.670778 kubelet[2407]: I0515 16:28:36.670514 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f2f0e626d037b0d89b457711ff4fdf5-flexvolume-dir\") pod \"kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"0f2f0e626d037b0d89b457711ff4fdf5\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:36.670778 kubelet[2407]: I0515 16:28:36.670555 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f2f0e626d037b0d89b457711ff4fdf5-k8s-certs\") pod \"kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"0f2f0e626d037b0d89b457711ff4fdf5\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:36.670778 kubelet[2407]: I0515 16:28:36.670599 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f2f0e626d037b0d89b457711ff4fdf5-kubeconfig\") pod \"kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"0f2f0e626d037b0d89b457711ff4fdf5\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:36.670778 kubelet[2407]: I0515 16:28:36.670641 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f2f0e626d037b0d89b457711ff4fdf5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"0f2f0e626d037b0d89b457711ff4fdf5\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:36.671139 kubelet[2407]: I0515 16:28:36.670686 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f805d3aaa5774fbf25211376c16f0d6e-ca-certs\") pod \"kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"f805d3aaa5774fbf25211376c16f0d6e\") " pod="kube-system/kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:36.671139 kubelet[2407]: I0515 16:28:36.670728 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/76c87c9e9d678e9b3a89e7419fcd7c03-kubeconfig\") pod \"kube-scheduler-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"76c87c9e9d678e9b3a89e7419fcd7c03\") " 
pod="kube-system/kube-scheduler-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:36.671139 kubelet[2407]: I0515 16:28:36.670771 2407 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f805d3aaa5774fbf25211376c16f0d6e-k8s-certs\") pod \"kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"f805d3aaa5774fbf25211376c16f0d6e\") " pod="kube-system/kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:36.779431 kubelet[2407]: I0515 16:28:36.779268 2407 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:36.780540 kubelet[2407]: E0515 16:28:36.780482 2407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.121:6443/api/v1/nodes\": dial tcp 172.24.4.121:6443: connect: connection refused" node="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:36.864396 containerd[1540]: time="2025-05-15T16:28:36.863678051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal,Uid:f805d3aaa5774fbf25211376c16f0d6e,Namespace:kube-system,Attempt:0,}" May 15 16:28:36.881595 containerd[1540]: time="2025-05-15T16:28:36.881402673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal,Uid:0f2f0e626d037b0d89b457711ff4fdf5,Namespace:kube-system,Attempt:0,}" May 15 16:28:36.887550 containerd[1540]: time="2025-05-15T16:28:36.887456727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334-0-0-a-855fb07f2a.novalocal,Uid:76c87c9e9d678e9b3a89e7419fcd7c03,Namespace:kube-system,Attempt:0,}" May 15 16:28:36.973032 containerd[1540]: time="2025-05-15T16:28:36.972167953Z" level=info msg="connecting to shim 4a152f4a384816839894b83ac950e3b410d72337f826a3a7025060c5bcfb6c64" address="unix:///run/containerd/s/731189a2efb8f7f479fb3dfdac420b77783705117f1d08ae4e20792b483a5dbf" namespace=k8s.io protocol=ttrpc version=3 May 15 16:28:36.978576 kubelet[2407]: E0515 16:28:36.978278 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.121:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4334-0-0-a-855fb07f2a.novalocal?timeout=10s\": dial tcp 172.24.4.121:6443: connect: connection refused" interval="800ms" May 15 16:28:36.993298 containerd[1540]: time="2025-05-15T16:28:36.993246045Z" level=info msg="connecting to shim 9eb886cf0ee6f03ab21915d5edbea404d5f3dc0a9fc9b30753374985c43a1d6f" address="unix:///run/containerd/s/3d771569b5027b1b3cf7a9bc4a909e2da9fbe2a633756ec397d7a5fda00a7c8b" namespace=k8s.io protocol=ttrpc version=3 May 15 16:28:37.008503 containerd[1540]: time="2025-05-15T16:28:37.007277692Z" level=info msg="connecting to shim e3899e43b37ece2dd7a191065501a2e6f4b785a4791e3a97a56937236e4f5658" address="unix:///run/containerd/s/edc72ac04ecd72ce5c5e0bc3c6b4a62a8d90bec0be4d3945c0a93d41e10a65ce" namespace=k8s.io protocol=ttrpc version=3 May 15 16:28:37.028088 systemd[1]: Started cri-containerd-9eb886cf0ee6f03ab21915d5edbea404d5f3dc0a9fc9b30753374985c43a1d6f.scope - libcontainer container 9eb886cf0ee6f03ab21915d5edbea404d5f3dc0a9fc9b30753374985c43a1d6f. May 15 16:28:37.043314 systemd[1]: Started cri-containerd-4a152f4a384816839894b83ac950e3b410d72337f826a3a7025060c5bcfb6c64.scope - libcontainer container 4a152f4a384816839894b83ac950e3b410d72337f826a3a7025060c5bcfb6c64. 
May 15 16:28:37.056207 systemd[1]: Started cri-containerd-e3899e43b37ece2dd7a191065501a2e6f4b785a4791e3a97a56937236e4f5658.scope - libcontainer container e3899e43b37ece2dd7a191065501a2e6f4b785a4791e3a97a56937236e4f5658. May 15 16:28:37.107734 containerd[1540]: time="2025-05-15T16:28:37.107686528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal,Uid:f805d3aaa5774fbf25211376c16f0d6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a152f4a384816839894b83ac950e3b410d72337f826a3a7025060c5bcfb6c64\"" May 15 16:28:37.114264 containerd[1540]: time="2025-05-15T16:28:37.114231013Z" level=info msg="CreateContainer within sandbox \"4a152f4a384816839894b83ac950e3b410d72337f826a3a7025060c5bcfb6c64\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 16:28:37.140351 containerd[1540]: time="2025-05-15T16:28:37.140212611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal,Uid:0f2f0e626d037b0d89b457711ff4fdf5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3899e43b37ece2dd7a191065501a2e6f4b785a4791e3a97a56937236e4f5658\"" May 15 16:28:37.146501 containerd[1540]: time="2025-05-15T16:28:37.146120581Z" level=info msg="CreateContainer within sandbox \"e3899e43b37ece2dd7a191065501a2e6f4b785a4791e3a97a56937236e4f5658\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 16:28:37.150084 containerd[1540]: time="2025-05-15T16:28:37.150036746Z" level=info msg="Container 1c1e0b94033ff727535b70477e34fea60aa0319a51699707f4d80c7ef0e43eaa: CDI devices from CRI Config.CDIDevices: []" May 15 16:28:37.163332 containerd[1540]: time="2025-05-15T16:28:37.163290574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4334-0-0-a-855fb07f2a.novalocal,Uid:76c87c9e9d678e9b3a89e7419fcd7c03,Namespace:kube-system,Attempt:0,} returns sandbox id \"9eb886cf0ee6f03ab21915d5edbea404d5f3dc0a9fc9b30753374985c43a1d6f\"" May 15 16:28:37.166911 containerd[1540]: time="2025-05-15T16:28:37.166597796Z" level=info msg="CreateContainer within sandbox \"9eb886cf0ee6f03ab21915d5edbea404d5f3dc0a9fc9b30753374985c43a1d6f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 16:28:37.179510 containerd[1540]: time="2025-05-15T16:28:37.179477202Z" level=info msg="Container 6dde7add2b5407cc1cc2dccad70b03dc94d3442850302f75505c23e64050db8c: CDI devices from CRI Config.CDIDevices: []" May 15 16:28:37.184750 containerd[1540]: time="2025-05-15T16:28:37.184677575Z" level=info msg="CreateContainer within sandbox \"4a152f4a384816839894b83ac950e3b410d72337f826a3a7025060c5bcfb6c64\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1c1e0b94033ff727535b70477e34fea60aa0319a51699707f4d80c7ef0e43eaa\"" May 15 16:28:37.185769 kubelet[2407]: I0515 16:28:37.185748 2407 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:37.186029 containerd[1540]: time="2025-05-15T16:28:37.185983484Z" level=info msg="StartContainer for \"1c1e0b94033ff727535b70477e34fea60aa0319a51699707f4d80c7ef0e43eaa\"" May 15 16:28:37.186678 kubelet[2407]: E0515 16:28:37.186654 2407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.121:6443/api/v1/nodes\": dial tcp 172.24.4.121:6443: connect: connection refused" node="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:37.190942 containerd[1540]: time="2025-05-15T16:28:37.190839602Z" 
level=info msg="connecting to shim 1c1e0b94033ff727535b70477e34fea60aa0319a51699707f4d80c7ef0e43eaa" address="unix:///run/containerd/s/731189a2efb8f7f479fb3dfdac420b77783705117f1d08ae4e20792b483a5dbf" protocol=ttrpc version=3 May 15 16:28:37.201912 containerd[1540]: time="2025-05-15T16:28:37.201806090Z" level=info msg="CreateContainer within sandbox \"e3899e43b37ece2dd7a191065501a2e6f4b785a4791e3a97a56937236e4f5658\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6dde7add2b5407cc1cc2dccad70b03dc94d3442850302f75505c23e64050db8c\"" May 15 16:28:37.203235 containerd[1540]: time="2025-05-15T16:28:37.203195355Z" level=info msg="Container b878f401a54e97fa6b11b3ab13186a732b09bac37bf0df076eafde812d63d4e4: CDI devices from CRI Config.CDIDevices: []" May 15 16:28:37.204535 containerd[1540]: time="2025-05-15T16:28:37.204488811Z" level=info msg="StartContainer for \"6dde7add2b5407cc1cc2dccad70b03dc94d3442850302f75505c23e64050db8c\"" May 15 16:28:37.207801 containerd[1540]: time="2025-05-15T16:28:37.207585709Z" level=info msg="connecting to shim 6dde7add2b5407cc1cc2dccad70b03dc94d3442850302f75505c23e64050db8c" address="unix:///run/containerd/s/edc72ac04ecd72ce5c5e0bc3c6b4a62a8d90bec0be4d3945c0a93d41e10a65ce" protocol=ttrpc version=3 May 15 16:28:37.220264 systemd[1]: Started cri-containerd-1c1e0b94033ff727535b70477e34fea60aa0319a51699707f4d80c7ef0e43eaa.scope - libcontainer container 1c1e0b94033ff727535b70477e34fea60aa0319a51699707f4d80c7ef0e43eaa. May 15 16:28:37.221918 containerd[1540]: time="2025-05-15T16:28:37.220496253Z" level=info msg="CreateContainer within sandbox \"9eb886cf0ee6f03ab21915d5edbea404d5f3dc0a9fc9b30753374985c43a1d6f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b878f401a54e97fa6b11b3ab13186a732b09bac37bf0df076eafde812d63d4e4\"" May 15 16:28:37.221918 containerd[1540]: time="2025-05-15T16:28:37.220975562Z" level=info msg="StartContainer for \"b878f401a54e97fa6b11b3ab13186a732b09bac37bf0df076eafde812d63d4e4\"" May 15 16:28:37.222728 containerd[1540]: time="2025-05-15T16:28:37.222670761Z" level=info msg="connecting to shim b878f401a54e97fa6b11b3ab13186a732b09bac37bf0df076eafde812d63d4e4" address="unix:///run/containerd/s/3d771569b5027b1b3cf7a9bc4a909e2da9fbe2a633756ec397d7a5fda00a7c8b" protocol=ttrpc version=3 May 15 16:28:37.245161 systemd[1]: Started cri-containerd-6dde7add2b5407cc1cc2dccad70b03dc94d3442850302f75505c23e64050db8c.scope - libcontainer container 6dde7add2b5407cc1cc2dccad70b03dc94d3442850302f75505c23e64050db8c. May 15 16:28:37.260063 systemd[1]: Started cri-containerd-b878f401a54e97fa6b11b3ab13186a732b09bac37bf0df076eafde812d63d4e4.scope - libcontainer container b878f401a54e97fa6b11b3ab13186a732b09bac37bf0df076eafde812d63d4e4. 
May 15 16:28:37.326112 containerd[1540]: time="2025-05-15T16:28:37.326020444Z" level=info msg="StartContainer for \"1c1e0b94033ff727535b70477e34fea60aa0319a51699707f4d80c7ef0e43eaa\" returns successfully" May 15 16:28:37.341041 containerd[1540]: time="2025-05-15T16:28:37.340791817Z" level=info msg="StartContainer for \"6dde7add2b5407cc1cc2dccad70b03dc94d3442850302f75505c23e64050db8c\" returns successfully" May 15 16:28:37.351326 kubelet[2407]: W0515 16:28:37.351250 2407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334-0-0-a-855fb07f2a.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.121:6443: connect: connection refused May 15 16:28:37.351552 kubelet[2407]: E0515 16:28:37.351511 2407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.121:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4334-0-0-a-855fb07f2a.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.121:6443: connect: connection refused" logger="UnhandledError" May 15 16:28:37.404310 containerd[1540]: time="2025-05-15T16:28:37.403901259Z" level=info msg="StartContainer for \"b878f401a54e97fa6b11b3ab13186a732b09bac37bf0df076eafde812d63d4e4\" returns successfully" May 15 16:28:37.990909 kubelet[2407]: I0515 16:28:37.989698 2407 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:39.283972 kubelet[2407]: I0515 16:28:39.283916 2407 kubelet_node_status.go:75] "Successfully registered node" node="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:39.284949 kubelet[2407]: E0515 16:28:39.284740 2407 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4334-0-0-a-855fb07f2a.novalocal\": node \"ci-4334-0-0-a-855fb07f2a.novalocal\" not found" May 15 16:28:39.330950 kubelet[2407]: E0515 16:28:39.330913 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4334-0-0-a-855fb07f2a.novalocal\" not found" May 15 16:28:39.356401 kubelet[2407]: E0515 16:28:39.356341 2407 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" May 15 16:28:39.431292 kubelet[2407]: E0515 16:28:39.431259 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4334-0-0-a-855fb07f2a.novalocal\" not found" May 15 16:28:39.531782 kubelet[2407]: E0515 16:28:39.531719 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4334-0-0-a-855fb07f2a.novalocal\" not found" May 15 16:28:39.632270 kubelet[2407]: E0515 16:28:39.632239 2407 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4334-0-0-a-855fb07f2a.novalocal\" not found" May 15 16:28:40.338396 kubelet[2407]: I0515 16:28:40.338363 2407 apiserver.go:52] "Watching apiserver" May 15 16:28:40.368518 kubelet[2407]: I0515 16:28:40.368368 2407 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 16:28:41.831285 systemd[1]: Reload requested from client PID 2678 ('systemctl') (unit session-11.scope)... May 15 16:28:41.831390 systemd[1]: Reloading... May 15 16:28:41.968928 zram_generator::config[2726]: No configuration found. 
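From the klog timestamps in this run, the gap between "Started kubelet" (16:28:36.349) and "Successfully registered node" (16:28:39.284) is roughly 2.9 seconds, which is the window in which the static control-plane containers above came up. A short Go sketch of that arithmetic, with both timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

// Time from kubelet start to successful node registration, taken from the
// klog timestamps above (both on 2025-05-15, UTC).
func main() {
	started, _ := time.Parse(time.RFC3339Nano, "2025-05-15T16:28:36.349030000Z")
	registered, _ := time.Parse(time.RFC3339Nano, "2025-05-15T16:28:39.283916000Z")
	fmt.Println("kubelet start -> node registered:", registered.Sub(started))
}
```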
May 15 16:28:42.100334 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 16:28:42.257009 systemd[1]: Reloading finished in 424 ms. May 15 16:28:42.292344 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 16:28:42.315338 systemd[1]: kubelet.service: Deactivated successfully. May 15 16:28:42.315734 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 16:28:42.315828 systemd[1]: kubelet.service: Consumed 1.191s CPU time, 114.9M memory peak. May 15 16:28:42.318208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 16:28:42.578069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 16:28:42.587662 (kubelet)[2786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 16:28:42.634836 kubelet[2786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 16:28:42.635564 kubelet[2786]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 16:28:42.635564 kubelet[2786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 16:28:42.635564 kubelet[2786]: I0515 16:28:42.635133 2786 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 16:28:42.646926 kubelet[2786]: I0515 16:28:42.645962 2786 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 16:28:42.646926 kubelet[2786]: I0515 16:28:42.646011 2786 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 16:28:42.646926 kubelet[2786]: I0515 16:28:42.646519 2786 server.go:929] "Client rotation is on, will bootstrap in background" May 15 16:28:42.650667 kubelet[2786]: I0515 16:28:42.650625 2786 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 16:28:42.656130 kubelet[2786]: I0515 16:28:42.656071 2786 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 16:28:42.664092 kubelet[2786]: I0515 16:28:42.664040 2786 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 16:28:42.667292 kubelet[2786]: I0515 16:28:42.667256 2786 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 16:28:42.667519 kubelet[2786]: I0515 16:28:42.667429 2786 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 16:28:42.667617 kubelet[2786]: I0515 16:28:42.667539 2786 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 16:28:42.667767 kubelet[2786]: I0515 16:28:42.667566 2786 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4334-0-0-a-855fb07f2a.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 16:28:42.667767 kubelet[2786]: I0515 16:28:42.667754 2786 topology_manager.go:138] "Creating topology manager with none policy" May 15 16:28:42.667767 kubelet[2786]: I0515 16:28:42.667765 2786 container_manager_linux.go:300] "Creating device plugin manager" May 15 16:28:42.668301 kubelet[2786]: I0515 16:28:42.667796 2786 state_mem.go:36] "Initialized new in-memory state store" May 15 16:28:42.668301 kubelet[2786]: I0515 16:28:42.667933 2786 kubelet.go:408] "Attempting to sync node with API server" May 15 16:28:42.668301 kubelet[2786]: I0515 16:28:42.667949 2786 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 16:28:42.668595 kubelet[2786]: I0515 16:28:42.668382 2786 kubelet.go:314] "Adding apiserver pod source" May 15 16:28:42.668595 kubelet[2786]: I0515 16:28:42.668409 2786 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 16:28:42.671016 kubelet[2786]: I0515 16:28:42.670972 2786 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 15 16:28:42.671396 kubelet[2786]: I0515 16:28:42.671360 2786 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 16:28:42.671779 kubelet[2786]: I0515 16:28:42.671747 2786 server.go:1269] "Started kubelet" May 15 16:28:42.676916 kubelet[2786]: I0515 16:28:42.675739 2786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 16:28:42.690613 
kubelet[2786]: I0515 16:28:42.690497 2786 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 16:28:42.692016 kubelet[2786]: I0515 16:28:42.691978 2786 server.go:460] "Adding debug handlers to kubelet server" May 15 16:28:42.703338 kubelet[2786]: I0515 16:28:42.692122 2786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 16:28:42.703338 kubelet[2786]: I0515 16:28:42.703110 2786 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 16:28:42.703338 kubelet[2786]: I0515 16:28:42.695549 2786 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 16:28:42.705116 kubelet[2786]: I0515 16:28:42.694063 2786 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 16:28:42.705246 kubelet[2786]: I0515 16:28:42.695577 2786 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 16:28:42.705408 kubelet[2786]: I0515 16:28:42.705396 2786 reconciler.go:26] "Reconciler: start to sync state" May 15 16:28:42.705510 kubelet[2786]: I0515 16:28:42.705490 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 16:28:42.706531 kubelet[2786]: I0515 16:28:42.706514 2786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 16:28:42.706608 kubelet[2786]: I0515 16:28:42.706598 2786 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 16:28:42.706693 kubelet[2786]: I0515 16:28:42.706683 2786 kubelet.go:2321] "Starting kubelet main sync loop" May 15 16:28:42.706802 kubelet[2786]: E0515 16:28:42.706777 2786 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 16:28:42.707071 kubelet[2786]: E0515 16:28:42.695706 2786 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4334-0-0-a-855fb07f2a.novalocal\" not found" May 15 16:28:42.715438 kubelet[2786]: I0515 16:28:42.715406 2786 factory.go:221] Registration of the systemd container factory successfully May 15 16:28:42.715541 kubelet[2786]: I0515 16:28:42.715513 2786 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 16:28:42.718648 kubelet[2786]: I0515 16:28:42.718606 2786 factory.go:221] Registration of the containerd container factory successfully May 15 16:28:42.726975 kubelet[2786]: E0515 16:28:42.726930 2786 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 16:28:42.774478 kubelet[2786]: I0515 16:28:42.773977 2786 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 16:28:42.774478 kubelet[2786]: I0515 16:28:42.773994 2786 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 16:28:42.774478 kubelet[2786]: I0515 16:28:42.774032 2786 state_mem.go:36] "Initialized new in-memory state store" May 15 16:28:42.774478 kubelet[2786]: I0515 16:28:42.774173 2786 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 16:28:42.774478 kubelet[2786]: I0515 16:28:42.774184 2786 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 16:28:42.774478 kubelet[2786]: I0515 16:28:42.774202 2786 policy_none.go:49] "None policy: Start" May 15 16:28:42.775133 kubelet[2786]: I0515 16:28:42.774942 2786 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 16:28:42.775133 kubelet[2786]: I0515 16:28:42.775003 2786 state_mem.go:35] "Initializing new in-memory state store" May 15 16:28:42.776763 kubelet[2786]: I0515 16:28:42.776721 2786 state_mem.go:75] "Updated machine memory state" May 15 16:28:42.785489 kubelet[2786]: I0515 16:28:42.785465 2786 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 16:28:42.787412 kubelet[2786]: I0515 16:28:42.786989 2786 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 16:28:42.787412 kubelet[2786]: I0515 16:28:42.787007 2786 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 16:28:42.787526 kubelet[2786]: I0515 16:28:42.787499 2786 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 16:28:42.825686 kubelet[2786]: W0515 16:28:42.825567 2786 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 16:28:42.831278 kubelet[2786]: W0515 16:28:42.831194 2786 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 16:28:42.831698 kubelet[2786]: W0515 16:28:42.831502 2786 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 16:28:42.894176 kubelet[2786]: I0515 16:28:42.893261 2786 kubelet_node_status.go:72] "Attempting to register node" node="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:42.906426 kubelet[2786]: I0515 16:28:42.906118 2786 kubelet_node_status.go:111] "Node was previously registered" node="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:42.906426 kubelet[2786]: I0515 16:28:42.906198 2786 kubelet_node_status.go:75] "Successfully registered node" node="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:42.908905 kubelet[2786]: I0515 16:28:42.908498 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f2f0e626d037b0d89b457711ff4fdf5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"0f2f0e626d037b0d89b457711ff4fdf5\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:42.909643 kubelet[2786]: I0515 16:28:42.909467 2786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/76c87c9e9d678e9b3a89e7419fcd7c03-kubeconfig\") pod \"kube-scheduler-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"76c87c9e9d678e9b3a89e7419fcd7c03\") " pod="kube-system/kube-scheduler-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:42.909643 kubelet[2786]: I0515 16:28:42.909542 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f805d3aaa5774fbf25211376c16f0d6e-k8s-certs\") pod \"kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"f805d3aaa5774fbf25211376c16f0d6e\") " pod="kube-system/kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:42.909756 kubelet[2786]: I0515 16:28:42.909650 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f2f0e626d037b0d89b457711ff4fdf5-ca-certs\") pod \"kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"0f2f0e626d037b0d89b457711ff4fdf5\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:42.909756 kubelet[2786]: I0515 16:28:42.909702 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f2f0e626d037b0d89b457711ff4fdf5-k8s-certs\") pod \"kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"0f2f0e626d037b0d89b457711ff4fdf5\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:42.909823 kubelet[2786]: I0515 16:28:42.909748 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f2f0e626d037b0d89b457711ff4fdf5-kubeconfig\") pod \"kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"0f2f0e626d037b0d89b457711ff4fdf5\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:42.909823 kubelet[2786]: I0515 16:28:42.909792 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f805d3aaa5774fbf25211376c16f0d6e-ca-certs\") pod \"kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"f805d3aaa5774fbf25211376c16f0d6e\") " pod="kube-system/kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:42.909899 kubelet[2786]: I0515 16:28:42.909837 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f805d3aaa5774fbf25211376c16f0d6e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"f805d3aaa5774fbf25211376c16f0d6e\") " pod="kube-system/kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:42.909947 kubelet[2786]: I0515 16:28:42.909917 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f2f0e626d037b0d89b457711ff4fdf5-flexvolume-dir\") pod \"kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal\" (UID: \"0f2f0e626d037b0d89b457711ff4fdf5\") " pod="kube-system/kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:43.680879 kubelet[2786]: I0515 16:28:43.679727 2786 apiserver.go:52] 
"Watching apiserver" May 15 16:28:43.705516 kubelet[2786]: I0515 16:28:43.705446 2786 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 16:28:43.783380 kubelet[2786]: W0515 16:28:43.783298 2786 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 16:28:43.783707 kubelet[2786]: E0515 16:28:43.783583 2786 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4334-0-0-a-855fb07f2a.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:43.786318 kubelet[2786]: W0515 16:28:43.786303 2786 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 16:28:43.786490 kubelet[2786]: E0515 16:28:43.786396 2786 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:28:43.849512 kubelet[2786]: I0515 16:28:43.849437 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4334-0-0-a-855fb07f2a.novalocal" podStartSLOduration=1.849351379 podStartE2EDuration="1.849351379s" podCreationTimestamp="2025-05-15 16:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 16:28:43.822024087 +0000 UTC m=+1.229577774" watchObservedRunningTime="2025-05-15 16:28:43.849351379 +0000 UTC m=+1.256905056" May 15 16:28:43.872380 kubelet[2786]: I0515 16:28:43.872154 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4334-0-0-a-855fb07f2a.novalocal" podStartSLOduration=1.8721373639999999 podStartE2EDuration="1.872137364s" podCreationTimestamp="2025-05-15 16:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 16:28:43.851487335 +0000 UTC m=+1.259041012" watchObservedRunningTime="2025-05-15 16:28:43.872137364 +0000 UTC m=+1.279691041" May 15 16:28:43.872380 kubelet[2786]: I0515 16:28:43.872257 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4334-0-0-a-855fb07f2a.novalocal" podStartSLOduration=1.8722508169999998 podStartE2EDuration="1.872250817s" podCreationTimestamp="2025-05-15 16:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 16:28:43.871435257 +0000 UTC m=+1.278988934" watchObservedRunningTime="2025-05-15 16:28:43.872250817 +0000 UTC m=+1.279804494" May 15 16:28:47.880069 kubelet[2786]: I0515 16:28:47.880010 2786 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 16:28:47.881192 kubelet[2786]: I0515 16:28:47.881029 2786 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 16:28:47.881277 containerd[1540]: time="2025-05-15T16:28:47.880321103Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 16:28:48.530968 sudo[1832]: pam_unix(sudo:session): session closed for user root May 15 16:28:48.719782 sshd[1831]: Connection closed by 172.24.4.1 port 41232 May 15 16:28:48.722104 sshd-session[1829]: pam_unix(sshd:session): session closed for user core May 15 16:28:48.743085 systemd[1]: sshd@8-172.24.4.121:22-172.24.4.1:41232.service: Deactivated successfully. May 15 16:28:48.756550 systemd[1]: session-11.scope: Deactivated successfully. May 15 16:28:48.757681 systemd[1]: session-11.scope: Consumed 6.861s CPU time, 228.7M memory peak. May 15 16:28:48.771631 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit. May 15 16:28:48.781342 systemd-logind[1497]: Removed session 11. May 15 16:28:48.815087 systemd[1]: Created slice kubepods-besteffort-pod696ceae2_e20f_40c5_903c_170dd3d361dd.slice - libcontainer container kubepods-besteffort-pod696ceae2_e20f_40c5_903c_170dd3d361dd.slice. May 15 16:28:48.855586 kubelet[2786]: I0515 16:28:48.855529 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/696ceae2-e20f-40c5-903c-170dd3d361dd-xtables-lock\") pod \"kube-proxy-f8zwv\" (UID: \"696ceae2-e20f-40c5-903c-170dd3d361dd\") " pod="kube-system/kube-proxy-f8zwv" May 15 16:28:48.855586 kubelet[2786]: I0515 16:28:48.855618 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/696ceae2-e20f-40c5-903c-170dd3d361dd-lib-modules\") pod \"kube-proxy-f8zwv\" (UID: \"696ceae2-e20f-40c5-903c-170dd3d361dd\") " pod="kube-system/kube-proxy-f8zwv" May 15 16:28:48.855917 kubelet[2786]: I0515 16:28:48.855650 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdxkw\" (UniqueName: \"kubernetes.io/projected/696ceae2-e20f-40c5-903c-170dd3d361dd-kube-api-access-fdxkw\") pod \"kube-proxy-f8zwv\" (UID: \"696ceae2-e20f-40c5-903c-170dd3d361dd\") " pod="kube-system/kube-proxy-f8zwv" May 15 16:28:48.855917 kubelet[2786]: I0515 16:28:48.855696 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/696ceae2-e20f-40c5-903c-170dd3d361dd-kube-proxy\") pod \"kube-proxy-f8zwv\" (UID: \"696ceae2-e20f-40c5-903c-170dd3d361dd\") " pod="kube-system/kube-proxy-f8zwv" May 15 16:28:48.882375 systemd[1]: Created slice kubepods-besteffort-pode62e54fb_24e5_4564_a061_6b63fea91121.slice - libcontainer container kubepods-besteffort-pode62e54fb_24e5_4564_a061_6b63fea91121.slice. 
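The kube-proxy pod created above mounts four volumes: the kube-proxy ConfigMap, a projected service-account token, and two hostPath volumes named xtables-lock and lib-modules. The log lists only the volume names; on a typical kubeadm-style deployment those hostPaths point at /run/xtables.lock and /lib/modules, which is assumed in the Go sketch below (the paths are conventions, not something this log confirms):

```go
package main

import (
	"fmt"
	"os"
)

// Check the host paths that the xtables-lock and lib-modules volumes of a
// typical kube-proxy pod point at. These paths are assumptions for
// illustration; the log only names the volumes.
func main() {
	for _, p := range []string{"/run/xtables.lock", "/lib/modules"} {
		if _, err := os.Stat(p); err != nil {
			fmt.Println("missing:", p, err)
			continue
		}
		fmt.Println("present:", p)
	}
}
```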
May 15 16:28:48.956690 kubelet[2786]: I0515 16:28:48.956605 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvdp2\" (UniqueName: \"kubernetes.io/projected/e62e54fb-24e5-4564-a061-6b63fea91121-kube-api-access-dvdp2\") pod \"tigera-operator-6f6897fdc5-zhtvb\" (UID: \"e62e54fb-24e5-4564-a061-6b63fea91121\") " pod="tigera-operator/tigera-operator-6f6897fdc5-zhtvb" May 15 16:28:48.956690 kubelet[2786]: I0515 16:28:48.956691 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e62e54fb-24e5-4564-a061-6b63fea91121-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-zhtvb\" (UID: \"e62e54fb-24e5-4564-a061-6b63fea91121\") " pod="tigera-operator/tigera-operator-6f6897fdc5-zhtvb" May 15 16:28:49.127216 containerd[1540]: time="2025-05-15T16:28:49.126999008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f8zwv,Uid:696ceae2-e20f-40c5-903c-170dd3d361dd,Namespace:kube-system,Attempt:0,}" May 15 16:28:49.188461 containerd[1540]: time="2025-05-15T16:28:49.187823096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-zhtvb,Uid:e62e54fb-24e5-4564-a061-6b63fea91121,Namespace:tigera-operator,Attempt:0,}" May 15 16:28:49.192062 containerd[1540]: time="2025-05-15T16:28:49.191971651Z" level=info msg="connecting to shim 81c3514bb564cff936e8a23f16f0a9810cec44d471946b6597fbf40ab6bfcdc7" address="unix:///run/containerd/s/1ce55e51bc4200da7c0f12d13c6a57fc5b5bdcc145106f4243ab78650c8b57c2" namespace=k8s.io protocol=ttrpc version=3 May 15 16:28:49.223429 containerd[1540]: time="2025-05-15T16:28:49.223130525Z" level=info msg="connecting to shim b025f975438de3a11681419e7daf2f5df2903d37464e3c5ed5d39c4515165168" address="unix:///run/containerd/s/c964bd39b36531c5b9ef526aa922ac42b92314cc319300f976ba36a6120f6155" namespace=k8s.io protocol=ttrpc version=3 May 15 16:28:49.245105 systemd[1]: Started cri-containerd-81c3514bb564cff936e8a23f16f0a9810cec44d471946b6597fbf40ab6bfcdc7.scope - libcontainer container 81c3514bb564cff936e8a23f16f0a9810cec44d471946b6597fbf40ab6bfcdc7. May 15 16:28:49.250580 systemd[1]: Started cri-containerd-b025f975438de3a11681419e7daf2f5df2903d37464e3c5ed5d39c4515165168.scope - libcontainer container b025f975438de3a11681419e7daf2f5df2903d37464e3c5ed5d39c4515165168. 
May 15 16:28:49.291904 containerd[1540]: time="2025-05-15T16:28:49.291805615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f8zwv,Uid:696ceae2-e20f-40c5-903c-170dd3d361dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"81c3514bb564cff936e8a23f16f0a9810cec44d471946b6597fbf40ab6bfcdc7\"" May 15 16:28:49.298968 containerd[1540]: time="2025-05-15T16:28:49.298854427Z" level=info msg="CreateContainer within sandbox \"81c3514bb564cff936e8a23f16f0a9810cec44d471946b6597fbf40ab6bfcdc7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 16:28:49.318056 containerd[1540]: time="2025-05-15T16:28:49.318003636Z" level=info msg="Container 9c866e3cfbd414415d1d13bc7a86ea6455846a77d5ad9b1741a6e76e0e7f94f4: CDI devices from CRI Config.CDIDevices: []" May 15 16:28:49.332425 containerd[1540]: time="2025-05-15T16:28:49.332315603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-zhtvb,Uid:e62e54fb-24e5-4564-a061-6b63fea91121,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b025f975438de3a11681419e7daf2f5df2903d37464e3c5ed5d39c4515165168\"" May 15 16:28:49.334010 containerd[1540]: time="2025-05-15T16:28:49.333944798Z" level=info msg="CreateContainer within sandbox \"81c3514bb564cff936e8a23f16f0a9810cec44d471946b6597fbf40ab6bfcdc7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c866e3cfbd414415d1d13bc7a86ea6455846a77d5ad9b1741a6e76e0e7f94f4\"" May 15 16:28:49.335263 containerd[1540]: time="2025-05-15T16:28:49.335077239Z" level=info msg="StartContainer for \"9c866e3cfbd414415d1d13bc7a86ea6455846a77d5ad9b1741a6e76e0e7f94f4\"" May 15 16:28:49.336902 containerd[1540]: time="2025-05-15T16:28:49.336635050Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 16:28:49.337198 containerd[1540]: time="2025-05-15T16:28:49.337157412Z" level=info msg="connecting to shim 9c866e3cfbd414415d1d13bc7a86ea6455846a77d5ad9b1741a6e76e0e7f94f4" address="unix:///run/containerd/s/1ce55e51bc4200da7c0f12d13c6a57fc5b5bdcc145106f4243ab78650c8b57c2" protocol=ttrpc version=3 May 15 16:28:49.368208 systemd[1]: Started cri-containerd-9c866e3cfbd414415d1d13bc7a86ea6455846a77d5ad9b1741a6e76e0e7f94f4.scope - libcontainer container 9c866e3cfbd414415d1d13bc7a86ea6455846a77d5ad9b1741a6e76e0e7f94f4. May 15 16:28:49.424947 containerd[1540]: time="2025-05-15T16:28:49.424820608Z" level=info msg="StartContainer for \"9c866e3cfbd414415d1d13bc7a86ea6455846a77d5ad9b1741a6e76e0e7f94f4\" returns successfully" May 15 16:28:51.090202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292271044.mount: Deactivated successfully. 
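The containerd messages above trace the standard CRI sequence the kubelet drives for kube-proxy: RunPodSandbox returns a sandbox id, CreateContainer registers a container inside that sandbox, and StartContainer launches it, with each step served by a containerd shim reached over the unix/ttrpc addresses in the log. Below is a minimal sketch of the same three calls issued directly against containerd's CRI endpoint with the cri-api client; the socket path, names and image are assumptions for illustration, not values taken from this boot.

// Sketch of the RunPodSandbox -> CreateContainer -> StartContainer flow the
// kubelet drives above, issued directly against containerd's CRI socket.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "demo", Namespace: "default", Uid: "0000-demo", Attempt: 0},
	}
	// Step 1: create the pod sandbox (pause container + namespaces).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}
	// Step 2: create a container inside that sandbox.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "demo", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/pause:3.9"}, // placeholder image
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	// Step 3: start it.
	if _, err := rt.StartContainer(ctx,
		&runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s in sandbox %s", ctr.ContainerId, sb.PodSandboxId)
}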
May 15 16:28:52.417631 containerd[1540]: time="2025-05-15T16:28:52.417524546Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:52.419995 containerd[1540]: time="2025-05-15T16:28:52.419334801Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 15 16:28:52.422733 containerd[1540]: time="2025-05-15T16:28:52.421134495Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:52.427007 containerd[1540]: time="2025-05-15T16:28:52.426941455Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:52.428772 containerd[1540]: time="2025-05-15T16:28:52.428696715Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 3.092013465s" May 15 16:28:52.428977 containerd[1540]: time="2025-05-15T16:28:52.428773089Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 16:28:52.438465 containerd[1540]: time="2025-05-15T16:28:52.438380174Z" level=info msg="CreateContainer within sandbox \"b025f975438de3a11681419e7daf2f5df2903d37464e3c5ed5d39c4515165168\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 16:28:52.464944 containerd[1540]: time="2025-05-15T16:28:52.463294714Z" level=info msg="Container 20f59cb2f72a89fb8b92fbf193f6a1790d26a1505a54fbc414ed56bfdfa7113c: CDI devices from CRI Config.CDIDevices: []" May 15 16:28:52.478681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4188290890.mount: Deactivated successfully. May 15 16:28:52.489307 containerd[1540]: time="2025-05-15T16:28:52.489238769Z" level=info msg="CreateContainer within sandbox \"b025f975438de3a11681419e7daf2f5df2903d37464e3c5ed5d39c4515165168\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"20f59cb2f72a89fb8b92fbf193f6a1790d26a1505a54fbc414ed56bfdfa7113c\"" May 15 16:28:52.492257 containerd[1540]: time="2025-05-15T16:28:52.492187143Z" level=info msg="StartContainer for \"20f59cb2f72a89fb8b92fbf193f6a1790d26a1505a54fbc414ed56bfdfa7113c\"" May 15 16:28:52.495711 containerd[1540]: time="2025-05-15T16:28:52.495615651Z" level=info msg="connecting to shim 20f59cb2f72a89fb8b92fbf193f6a1790d26a1505a54fbc414ed56bfdfa7113c" address="unix:///run/containerd/s/c964bd39b36531c5b9ef526aa922ac42b92314cc319300f976ba36a6120f6155" protocol=ttrpc version=3 May 15 16:28:52.534085 systemd[1]: Started cri-containerd-20f59cb2f72a89fb8b92fbf193f6a1790d26a1505a54fbc414ed56bfdfa7113c.scope - libcontainer container 20f59cb2f72a89fb8b92fbf193f6a1790d26a1505a54fbc414ed56bfdfa7113c. 
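For the operator image pull above, containerd reports 22002662 bytes read over the 3.092013465 s pull, roughly 7.1 MB/s; the size "21998657" in the Pulled message is containerd's recorded image size, which is accounted slightly differently from the bytes actually transferred. A quick check of the rate:

// Quick arithmetic check on the pull above: bytes read over the reported
// pull duration gives the effective transfer rate.
package main

import "fmt"

func main() {
	const bytesRead = 22002662      // "bytes read" from the containerd log
	const pullSeconds = 3.092013465 // "in 3.092013465s" from the Pulled message

	rate := float64(bytesRead) / pullSeconds
	fmt.Printf("~%.1f MB/s (%.0f bytes/s)\n", rate/1e6, rate)
}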
May 15 16:28:52.577993 containerd[1540]: time="2025-05-15T16:28:52.577946526Z" level=info msg="StartContainer for \"20f59cb2f72a89fb8b92fbf193f6a1790d26a1505a54fbc414ed56bfdfa7113c\" returns successfully" May 15 16:28:52.841794 kubelet[2786]: I0515 16:28:52.841427 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f8zwv" podStartSLOduration=4.841337982 podStartE2EDuration="4.841337982s" podCreationTimestamp="2025-05-15 16:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 16:28:49.86427281 +0000 UTC m=+7.271826497" watchObservedRunningTime="2025-05-15 16:28:52.841337982 +0000 UTC m=+10.248891709" May 15 16:28:52.847922 kubelet[2786]: I0515 16:28:52.843396 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-zhtvb" podStartSLOduration=1.747444592 podStartE2EDuration="4.842959421s" podCreationTimestamp="2025-05-15 16:28:48 +0000 UTC" firstStartedPulling="2025-05-15 16:28:49.336142393 +0000 UTC m=+6.743696080" lastFinishedPulling="2025-05-15 16:28:52.431657182 +0000 UTC m=+9.839210909" observedRunningTime="2025-05-15 16:28:52.841146081 +0000 UTC m=+10.248699808" watchObservedRunningTime="2025-05-15 16:28:52.842959421 +0000 UTC m=+10.250513148" May 15 16:28:55.949575 systemd[1]: Created slice kubepods-besteffort-pod9ae5563f_b345_4cab_8ec8_742caeb4987f.slice - libcontainer container kubepods-besteffort-pod9ae5563f_b345_4cab_8ec8_742caeb4987f.slice. May 15 16:28:56.008207 kubelet[2786]: I0515 16:28:56.008113 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ae5563f-b345-4cab-8ec8-742caeb4987f-tigera-ca-bundle\") pod \"calico-typha-86c98d4c89-vz8r2\" (UID: \"9ae5563f-b345-4cab-8ec8-742caeb4987f\") " pod="calico-system/calico-typha-86c98d4c89-vz8r2" May 15 16:28:56.008207 kubelet[2786]: I0515 16:28:56.008213 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gdfb\" (UniqueName: \"kubernetes.io/projected/9ae5563f-b345-4cab-8ec8-742caeb4987f-kube-api-access-8gdfb\") pod \"calico-typha-86c98d4c89-vz8r2\" (UID: \"9ae5563f-b345-4cab-8ec8-742caeb4987f\") " pod="calico-system/calico-typha-86c98d4c89-vz8r2" May 15 16:28:56.008665 kubelet[2786]: I0515 16:28:56.008237 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9ae5563f-b345-4cab-8ec8-742caeb4987f-typha-certs\") pod \"calico-typha-86c98d4c89-vz8r2\" (UID: \"9ae5563f-b345-4cab-8ec8-742caeb4987f\") " pod="calico-system/calico-typha-86c98d4c89-vz8r2" May 15 16:28:56.068515 systemd[1]: Created slice kubepods-besteffort-pod8ae61cac_23ac_4e0b_844e_4422aae3b5cf.slice - libcontainer container kubepods-besteffort-pod8ae61cac_23ac_4e0b_844e_4422aae3b5cf.slice. 
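The pod_startup_latency_tracker lines above are consistent with a simple relation: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken on the monotonic m=+ offsets). kube-proxy shows zero-valued pull timestamps (its image was already on the node), so its SLO and E2E durations coincide at ~4.841 s; for the tigera-operator pod the ~3.1 s pull explains the gap. Re-deriving the operator's SLO figure from the logged offsets:

// Reproduces the tigera-operator podStartSLOduration from the values logged
// above: E2E duration minus the image-pull window (monotonic m=+ offsets).
package main

import "fmt"

func main() {
	const (
		e2e                 = 4.842959421 // podStartE2EDuration, seconds
		firstStartedPulling = 6.743696080 // m=+ offset of firstStartedPulling
		lastFinishedPulling = 9.839210909 // m=+ offset of lastFinishedPulling
	)
	slo := e2e - (lastFinishedPulling - firstStartedPulling)
	fmt.Printf("podStartSLOduration ~ %.9f s\n", slo) // matches the logged 1.747444592
}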
May 15 16:28:56.108794 kubelet[2786]: I0515 16:28:56.108736 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8ae61cac-23ac-4e0b-844e-4422aae3b5cf-cni-net-dir\") pod \"calico-node-wlbtn\" (UID: \"8ae61cac-23ac-4e0b-844e-4422aae3b5cf\") " pod="calico-system/calico-node-wlbtn" May 15 16:28:56.108794 kubelet[2786]: I0515 16:28:56.108798 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8ae61cac-23ac-4e0b-844e-4422aae3b5cf-node-certs\") pod \"calico-node-wlbtn\" (UID: \"8ae61cac-23ac-4e0b-844e-4422aae3b5cf\") " pod="calico-system/calico-node-wlbtn" May 15 16:28:56.109039 kubelet[2786]: I0515 16:28:56.108821 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8ae61cac-23ac-4e0b-844e-4422aae3b5cf-var-lib-calico\") pod \"calico-node-wlbtn\" (UID: \"8ae61cac-23ac-4e0b-844e-4422aae3b5cf\") " pod="calico-system/calico-node-wlbtn" May 15 16:28:56.109039 kubelet[2786]: I0515 16:28:56.108919 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8ae61cac-23ac-4e0b-844e-4422aae3b5cf-policysync\") pod \"calico-node-wlbtn\" (UID: \"8ae61cac-23ac-4e0b-844e-4422aae3b5cf\") " pod="calico-system/calico-node-wlbtn" May 15 16:28:56.109039 kubelet[2786]: I0515 16:28:56.108964 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ae61cac-23ac-4e0b-844e-4422aae3b5cf-tigera-ca-bundle\") pod \"calico-node-wlbtn\" (UID: \"8ae61cac-23ac-4e0b-844e-4422aae3b5cf\") " pod="calico-system/calico-node-wlbtn" May 15 16:28:56.109039 kubelet[2786]: I0515 16:28:56.108988 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qc7z\" (UniqueName: \"kubernetes.io/projected/8ae61cac-23ac-4e0b-844e-4422aae3b5cf-kube-api-access-5qc7z\") pod \"calico-node-wlbtn\" (UID: \"8ae61cac-23ac-4e0b-844e-4422aae3b5cf\") " pod="calico-system/calico-node-wlbtn" May 15 16:28:56.109039 kubelet[2786]: I0515 16:28:56.109009 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ae61cac-23ac-4e0b-844e-4422aae3b5cf-lib-modules\") pod \"calico-node-wlbtn\" (UID: \"8ae61cac-23ac-4e0b-844e-4422aae3b5cf\") " pod="calico-system/calico-node-wlbtn" May 15 16:28:56.109184 kubelet[2786]: I0515 16:28:56.109052 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8ae61cac-23ac-4e0b-844e-4422aae3b5cf-flexvol-driver-host\") pod \"calico-node-wlbtn\" (UID: \"8ae61cac-23ac-4e0b-844e-4422aae3b5cf\") " pod="calico-system/calico-node-wlbtn" May 15 16:28:56.109184 kubelet[2786]: I0515 16:28:56.109073 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8ae61cac-23ac-4e0b-844e-4422aae3b5cf-cni-log-dir\") pod \"calico-node-wlbtn\" (UID: \"8ae61cac-23ac-4e0b-844e-4422aae3b5cf\") " pod="calico-system/calico-node-wlbtn" May 15 16:28:56.109184 kubelet[2786]: I0515 16:28:56.109090 2786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ae61cac-23ac-4e0b-844e-4422aae3b5cf-xtables-lock\") pod \"calico-node-wlbtn\" (UID: \"8ae61cac-23ac-4e0b-844e-4422aae3b5cf\") " pod="calico-system/calico-node-wlbtn" May 15 16:28:56.109184 kubelet[2786]: I0515 16:28:56.109124 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8ae61cac-23ac-4e0b-844e-4422aae3b5cf-var-run-calico\") pod \"calico-node-wlbtn\" (UID: \"8ae61cac-23ac-4e0b-844e-4422aae3b5cf\") " pod="calico-system/calico-node-wlbtn" May 15 16:28:56.109184 kubelet[2786]: I0515 16:28:56.109146 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8ae61cac-23ac-4e0b-844e-4422aae3b5cf-cni-bin-dir\") pod \"calico-node-wlbtn\" (UID: \"8ae61cac-23ac-4e0b-844e-4422aae3b5cf\") " pod="calico-system/calico-node-wlbtn" May 15 16:28:56.189067 kubelet[2786]: E0515 16:28:56.188984 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nr2s" podUID="959f2504-6d0c-476e-a05f-9ba36f5d930f" May 15 16:28:56.230711 kubelet[2786]: E0515 16:28:56.228851 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.230711 kubelet[2786]: W0515 16:28:56.230073 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.230711 kubelet[2786]: E0515 16:28:56.230158 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.257673 containerd[1540]: time="2025-05-15T16:28:56.256613979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86c98d4c89-vz8r2,Uid:9ae5563f-b345-4cab-8ec8-742caeb4987f,Namespace:calico-system,Attempt:0,}" May 15 16:28:56.262402 kubelet[2786]: E0515 16:28:56.262369 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.262516 kubelet[2786]: W0515 16:28:56.262394 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.262516 kubelet[2786]: E0515 16:28:56.262452 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.290855 kubelet[2786]: E0515 16:28:56.290806 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.291109 kubelet[2786]: W0515 16:28:56.291072 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.291109 kubelet[2786]: E0515 16:28:56.291104 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.294717 kubelet[2786]: E0515 16:28:56.294667 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.294827 kubelet[2786]: W0515 16:28:56.294728 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.294827 kubelet[2786]: E0515 16:28:56.294753 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.295635 kubelet[2786]: E0515 16:28:56.295608 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.295635 kubelet[2786]: W0515 16:28:56.295625 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.295741 kubelet[2786]: E0515 16:28:56.295638 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.297053 kubelet[2786]: E0515 16:28:56.297020 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.297146 kubelet[2786]: W0515 16:28:56.297124 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.297203 kubelet[2786]: E0515 16:28:56.297185 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.297638 kubelet[2786]: E0515 16:28:56.297614 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.297638 kubelet[2786]: W0515 16:28:56.297632 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.297726 kubelet[2786]: E0515 16:28:56.297645 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.298523 kubelet[2786]: E0515 16:28:56.298276 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.298523 kubelet[2786]: W0515 16:28:56.298518 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.298624 kubelet[2786]: E0515 16:28:56.298533 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.299736 kubelet[2786]: E0515 16:28:56.299707 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.299736 kubelet[2786]: W0515 16:28:56.299725 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.299822 kubelet[2786]: E0515 16:28:56.299738 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.307991 kubelet[2786]: E0515 16:28:56.305719 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.307991 kubelet[2786]: W0515 16:28:56.305752 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.307991 kubelet[2786]: E0515 16:28:56.305772 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.310654 kubelet[2786]: E0515 16:28:56.310050 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.310813 kubelet[2786]: W0515 16:28:56.310796 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.310963 kubelet[2786]: E0515 16:28:56.310947 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.312959 kubelet[2786]: E0515 16:28:56.312136 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.312959 kubelet[2786]: W0515 16:28:56.312154 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.312959 kubelet[2786]: E0515 16:28:56.312169 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.316921 kubelet[2786]: E0515 16:28:56.316891 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.317594 kubelet[2786]: W0515 16:28:56.317303 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.318184 kubelet[2786]: E0515 16:28:56.317327 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.319093 kubelet[2786]: E0515 16:28:56.319029 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.319093 kubelet[2786]: W0515 16:28:56.319042 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.319093 kubelet[2786]: E0515 16:28:56.319056 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.321384 kubelet[2786]: E0515 16:28:56.321351 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.321384 kubelet[2786]: W0515 16:28:56.321379 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.321548 kubelet[2786]: E0515 16:28:56.321403 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.321945 kubelet[2786]: E0515 16:28:56.321740 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.321945 kubelet[2786]: W0515 16:28:56.321754 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.321945 kubelet[2786]: E0515 16:28:56.321766 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.322082 kubelet[2786]: E0515 16:28:56.322009 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.322082 kubelet[2786]: W0515 16:28:56.322020 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.322082 kubelet[2786]: E0515 16:28:56.322030 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.328129 kubelet[2786]: E0515 16:28:56.322316 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.328129 kubelet[2786]: W0515 16:28:56.322327 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.328129 kubelet[2786]: E0515 16:28:56.322337 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.328129 kubelet[2786]: E0515 16:28:56.323855 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.328129 kubelet[2786]: W0515 16:28:56.323901 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.328129 kubelet[2786]: E0515 16:28:56.323913 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.328129 kubelet[2786]: E0515 16:28:56.324169 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.328129 kubelet[2786]: W0515 16:28:56.324180 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.328129 kubelet[2786]: E0515 16:28:56.324190 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.328129 kubelet[2786]: E0515 16:28:56.324374 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.328517 kubelet[2786]: W0515 16:28:56.324393 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.328517 kubelet[2786]: E0515 16:28:56.324403 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.328517 kubelet[2786]: E0515 16:28:56.326533 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.328517 kubelet[2786]: W0515 16:28:56.326545 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.328517 kubelet[2786]: E0515 16:28:56.326584 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.328517 kubelet[2786]: E0515 16:28:56.327068 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.328517 kubelet[2786]: W0515 16:28:56.327079 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.328517 kubelet[2786]: E0515 16:28:56.327089 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.328517 kubelet[2786]: E0515 16:28:56.327310 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.328517 kubelet[2786]: W0515 16:28:56.327322 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.330786 kubelet[2786]: E0515 16:28:56.327338 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.330786 kubelet[2786]: E0515 16:28:56.327496 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.330786 kubelet[2786]: W0515 16:28:56.327506 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.330786 kubelet[2786]: E0515 16:28:56.327516 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.330786 kubelet[2786]: E0515 16:28:56.327699 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.330786 kubelet[2786]: W0515 16:28:56.327708 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.330786 kubelet[2786]: E0515 16:28:56.327724 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.330786 kubelet[2786]: E0515 16:28:56.328078 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.330786 kubelet[2786]: W0515 16:28:56.328090 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.330786 kubelet[2786]: E0515 16:28:56.328100 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.331691 kubelet[2786]: E0515 16:28:56.328262 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.331691 kubelet[2786]: W0515 16:28:56.328272 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.331691 kubelet[2786]: E0515 16:28:56.328282 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.331691 kubelet[2786]: E0515 16:28:56.330360 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.331691 kubelet[2786]: W0515 16:28:56.330378 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.331691 kubelet[2786]: E0515 16:28:56.330389 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.331691 kubelet[2786]: E0515 16:28:56.330708 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.331691 kubelet[2786]: W0515 16:28:56.330724 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.331691 kubelet[2786]: E0515 16:28:56.330735 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.331691 kubelet[2786]: E0515 16:28:56.331232 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.335383 kubelet[2786]: W0515 16:28:56.331248 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.335383 kubelet[2786]: E0515 16:28:56.331258 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.335383 kubelet[2786]: E0515 16:28:56.331454 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.335383 kubelet[2786]: W0515 16:28:56.331478 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.335383 kubelet[2786]: E0515 16:28:56.331489 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.335383 kubelet[2786]: E0515 16:28:56.331655 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.335383 kubelet[2786]: W0515 16:28:56.331674 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.335383 kubelet[2786]: E0515 16:28:56.331690 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.335383 kubelet[2786]: E0515 16:28:56.331910 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.335383 kubelet[2786]: W0515 16:28:56.331921 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.335817 kubelet[2786]: E0515 16:28:56.331931 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.335817 kubelet[2786]: E0515 16:28:56.332134 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.335817 kubelet[2786]: W0515 16:28:56.332144 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.335817 kubelet[2786]: E0515 16:28:56.332153 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.335817 kubelet[2786]: E0515 16:28:56.332307 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.335817 kubelet[2786]: W0515 16:28:56.332317 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.335817 kubelet[2786]: E0515 16:28:56.332340 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.335817 kubelet[2786]: E0515 16:28:56.332564 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.335817 kubelet[2786]: W0515 16:28:56.332574 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.335817 kubelet[2786]: E0515 16:28:56.332584 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.338286 kubelet[2786]: E0515 16:28:56.332841 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.338286 kubelet[2786]: W0515 16:28:56.332974 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.338286 kubelet[2786]: E0515 16:28:56.332987 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.338286 kubelet[2786]: E0515 16:28:56.333439 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.338286 kubelet[2786]: W0515 16:28:56.333452 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.338286 kubelet[2786]: E0515 16:28:56.333462 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.338286 kubelet[2786]: E0515 16:28:56.333625 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.338286 kubelet[2786]: W0515 16:28:56.333634 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.338286 kubelet[2786]: E0515 16:28:56.333645 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.338286 kubelet[2786]: E0515 16:28:56.333788 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.338657 kubelet[2786]: W0515 16:28:56.333799 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.338657 kubelet[2786]: E0515 16:28:56.333809 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.338657 kubelet[2786]: E0515 16:28:56.334001 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.338657 kubelet[2786]: W0515 16:28:56.334011 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.338657 kubelet[2786]: E0515 16:28:56.334021 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.338657 kubelet[2786]: E0515 16:28:56.334255 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.338657 kubelet[2786]: W0515 16:28:56.334266 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.338657 kubelet[2786]: E0515 16:28:56.334277 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.338657 kubelet[2786]: E0515 16:28:56.334473 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.338657 kubelet[2786]: W0515 16:28:56.334482 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.341011 kubelet[2786]: E0515 16:28:56.334491 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.341011 kubelet[2786]: E0515 16:28:56.334647 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.341011 kubelet[2786]: W0515 16:28:56.334656 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.341011 kubelet[2786]: E0515 16:28:56.334665 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.341011 kubelet[2786]: E0515 16:28:56.334809 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.341011 kubelet[2786]: W0515 16:28:56.334819 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.341011 kubelet[2786]: E0515 16:28:56.334828 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.341011 kubelet[2786]: E0515 16:28:56.335058 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.341011 kubelet[2786]: W0515 16:28:56.335068 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.341011 kubelet[2786]: E0515 16:28:56.335077 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.341504 kubelet[2786]: E0515 16:28:56.335363 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.341504 kubelet[2786]: W0515 16:28:56.335379 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.341504 kubelet[2786]: E0515 16:28:56.335389 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.341504 kubelet[2786]: I0515 16:28:56.335435 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/959f2504-6d0c-476e-a05f-9ba36f5d930f-kubelet-dir\") pod \"csi-node-driver-7nr2s\" (UID: \"959f2504-6d0c-476e-a05f-9ba36f5d930f\") " pod="calico-system/csi-node-driver-7nr2s" May 15 16:28:56.341504 kubelet[2786]: E0515 16:28:56.335774 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.341504 kubelet[2786]: W0515 16:28:56.335795 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.341504 kubelet[2786]: E0515 16:28:56.335830 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.341504 kubelet[2786]: I0515 16:28:56.335852 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/959f2504-6d0c-476e-a05f-9ba36f5d930f-socket-dir\") pod \"csi-node-driver-7nr2s\" (UID: \"959f2504-6d0c-476e-a05f-9ba36f5d930f\") " pod="calico-system/csi-node-driver-7nr2s" May 15 16:28:56.341504 kubelet[2786]: E0515 16:28:56.337695 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.341822 kubelet[2786]: W0515 16:28:56.337713 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.341822 kubelet[2786]: E0515 16:28:56.337729 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.341822 kubelet[2786]: E0515 16:28:56.337922 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.341822 kubelet[2786]: W0515 16:28:56.337933 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.341822 kubelet[2786]: E0515 16:28:56.337942 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.341822 kubelet[2786]: E0515 16:28:56.339093 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.341822 kubelet[2786]: W0515 16:28:56.339105 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.341822 kubelet[2786]: E0515 16:28:56.339163 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.341822 kubelet[2786]: I0515 16:28:56.339194 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn4tj\" (UniqueName: \"kubernetes.io/projected/959f2504-6d0c-476e-a05f-9ba36f5d930f-kube-api-access-mn4tj\") pod \"csi-node-driver-7nr2s\" (UID: \"959f2504-6d0c-476e-a05f-9ba36f5d930f\") " pod="calico-system/csi-node-driver-7nr2s" May 15 16:28:56.344986 kubelet[2786]: E0515 16:28:56.340660 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.344986 kubelet[2786]: W0515 16:28:56.340672 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.344986 kubelet[2786]: E0515 16:28:56.340701 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.344986 kubelet[2786]: E0515 16:28:56.340947 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.344986 kubelet[2786]: W0515 16:28:56.340957 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.344986 kubelet[2786]: E0515 16:28:56.341087 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.344986 kubelet[2786]: E0515 16:28:56.341174 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.344986 kubelet[2786]: W0515 16:28:56.341184 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.344986 kubelet[2786]: E0515 16:28:56.341251 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.346984 containerd[1540]: time="2025-05-15T16:28:56.344559925Z" level=info msg="connecting to shim 489724a1091d53f7f52483284b5986d0052265de80cc0fa21b0b32e1c2f4239c" address="unix:///run/containerd/s/c5b9596e04231bfce23c107c8340caf8d5001e3ef4bbd68569ce9b564b2ab4bc" namespace=k8s.io protocol=ttrpc version=3 May 15 16:28:56.347029 kubelet[2786]: I0515 16:28:56.341288 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/959f2504-6d0c-476e-a05f-9ba36f5d930f-registration-dir\") pod \"csi-node-driver-7nr2s\" (UID: \"959f2504-6d0c-476e-a05f-9ba36f5d930f\") " pod="calico-system/csi-node-driver-7nr2s" May 15 16:28:56.347029 kubelet[2786]: E0515 16:28:56.341464 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.347029 kubelet[2786]: W0515 16:28:56.341475 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.347029 kubelet[2786]: E0515 16:28:56.341498 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.347029 kubelet[2786]: E0515 16:28:56.341669 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.347029 kubelet[2786]: W0515 16:28:56.341681 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.347029 kubelet[2786]: E0515 16:28:56.341703 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.347029 kubelet[2786]: E0515 16:28:56.341891 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.347029 kubelet[2786]: W0515 16:28:56.341901 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.347375 kubelet[2786]: E0515 16:28:56.341923 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.347375 kubelet[2786]: I0515 16:28:56.341941 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/959f2504-6d0c-476e-a05f-9ba36f5d930f-varrun\") pod \"csi-node-driver-7nr2s\" (UID: \"959f2504-6d0c-476e-a05f-9ba36f5d930f\") " pod="calico-system/csi-node-driver-7nr2s" May 15 16:28:56.347375 kubelet[2786]: E0515 16:28:56.342173 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.347375 kubelet[2786]: W0515 16:28:56.342184 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.347375 kubelet[2786]: E0515 16:28:56.342207 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.347375 kubelet[2786]: E0515 16:28:56.342385 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.347375 kubelet[2786]: W0515 16:28:56.342395 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.347375 kubelet[2786]: E0515 16:28:56.342429 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.347375 kubelet[2786]: E0515 16:28:56.342604 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.347673 kubelet[2786]: W0515 16:28:56.342614 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.347673 kubelet[2786]: E0515 16:28:56.342624 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.347673 kubelet[2786]: E0515 16:28:56.343075 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.347673 kubelet[2786]: W0515 16:28:56.343086 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.347673 kubelet[2786]: E0515 16:28:56.343096 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.375361 containerd[1540]: time="2025-05-15T16:28:56.374885135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wlbtn,Uid:8ae61cac-23ac-4e0b-844e-4422aae3b5cf,Namespace:calico-system,Attempt:0,}" May 15 16:28:56.396153 systemd[1]: Started cri-containerd-489724a1091d53f7f52483284b5986d0052265de80cc0fa21b0b32e1c2f4239c.scope - libcontainer container 489724a1091d53f7f52483284b5986d0052265de80cc0fa21b0b32e1c2f4239c. May 15 16:28:56.420823 containerd[1540]: time="2025-05-15T16:28:56.420462228Z" level=info msg="connecting to shim 94338eb06c9902333709c6e13310ff02437b234bf82e61825059e2275a35efe1" address="unix:///run/containerd/s/3728dfe1c80fa4812f24822b484f193a44f415645b619ac7585a6fd30e8ed90a" namespace=k8s.io protocol=ttrpc version=3 May 15 16:28:56.443484 kubelet[2786]: E0515 16:28:56.443338 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.443898 kubelet[2786]: W0515 16:28:56.443731 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.443898 kubelet[2786]: E0515 16:28:56.443763 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.444657 kubelet[2786]: E0515 16:28:56.444614 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.445055 kubelet[2786]: W0515 16:28:56.444903 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.445430 kubelet[2786]: E0515 16:28:56.445139 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.445610 kubelet[2786]: E0515 16:28:56.445590 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.446223 kubelet[2786]: W0515 16:28:56.445667 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.446223 kubelet[2786]: E0515 16:28:56.446090 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.446468 kubelet[2786]: E0515 16:28:56.446429 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.446468 kubelet[2786]: W0515 16:28:56.446442 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.446652 kubelet[2786]: E0515 16:28:56.446567 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.446858 kubelet[2786]: E0515 16:28:56.446845 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.447052 kubelet[2786]: W0515 16:28:56.446981 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.447417 kubelet[2786]: E0515 16:28:56.447373 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.448094 kubelet[2786]: E0515 16:28:56.448010 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.448094 kubelet[2786]: W0515 16:28:56.448023 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.448609 kubelet[2786]: E0515 16:28:56.448420 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.449044 kubelet[2786]: E0515 16:28:56.448911 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.449044 kubelet[2786]: W0515 16:28:56.448923 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.449440 kubelet[2786]: E0515 16:28:56.449424 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.450053 kubelet[2786]: E0515 16:28:56.450013 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.450282 kubelet[2786]: W0515 16:28:56.450123 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.450466 kubelet[2786]: E0515 16:28:56.450358 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.451974 kubelet[2786]: E0515 16:28:56.451790 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.452178 kubelet[2786]: W0515 16:28:56.452140 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.452823 kubelet[2786]: E0515 16:28:56.452805 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.453109 kubelet[2786]: E0515 16:28:56.453082 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.453109 kubelet[2786]: W0515 16:28:56.453095 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.453303 kubelet[2786]: E0515 16:28:56.453278 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.453515 kubelet[2786]: E0515 16:28:56.453488 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.453515 kubelet[2786]: W0515 16:28:56.453500 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.453729 kubelet[2786]: E0515 16:28:56.453706 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.454035 kubelet[2786]: E0515 16:28:56.454003 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.454035 kubelet[2786]: W0515 16:28:56.454014 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.454286 kubelet[2786]: E0515 16:28:56.454131 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.454589 kubelet[2786]: E0515 16:28:56.454577 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.454734 kubelet[2786]: W0515 16:28:56.454633 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.454734 kubelet[2786]: E0515 16:28:56.454648 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.454991 kubelet[2786]: E0515 16:28:56.454967 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.454991 kubelet[2786]: W0515 16:28:56.454979 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.455170 kubelet[2786]: E0515 16:28:56.455156 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.455538 kubelet[2786]: E0515 16:28:56.455463 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.455538 kubelet[2786]: W0515 16:28:56.455473 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.455757 kubelet[2786]: E0515 16:28:56.455743 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.455965 kubelet[2786]: E0515 16:28:56.455839 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.455965 kubelet[2786]: W0515 16:28:56.455905 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.456200 kubelet[2786]: E0515 16:28:56.456188 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.456792 kubelet[2786]: E0515 16:28:56.456778 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.456960 kubelet[2786]: W0515 16:28:56.456851 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.457445 kubelet[2786]: E0515 16:28:56.457400 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.457445 kubelet[2786]: W0515 16:28:56.457412 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.458050 kubelet[2786]: E0515 16:28:56.457911 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.458050 kubelet[2786]: W0515 16:28:56.457923 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.459538 kubelet[2786]: E0515 16:28:56.459101 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.459538 kubelet[2786]: W0515 16:28:56.459117 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.459538 kubelet[2786]: E0515 16:28:56.459129 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.459538 kubelet[2786]: E0515 16:28:56.459153 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.460638 kubelet[2786]: E0515 16:28:56.459955 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.460638 kubelet[2786]: W0515 16:28:56.459968 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.460638 kubelet[2786]: E0515 16:28:56.459979 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.461295 kubelet[2786]: E0515 16:28:56.460942 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.461423 kubelet[2786]: W0515 16:28:56.461407 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.461497 kubelet[2786]: E0515 16:28:56.461484 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.461756 kubelet[2786]: E0515 16:28:56.461203 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.462004 kubelet[2786]: E0515 16:28:56.461944 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.462706 kubelet[2786]: W0515 16:28:56.462357 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.462706 kubelet[2786]: E0515 16:28:56.462377 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.462706 kubelet[2786]: E0515 16:28:56.461213 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.463084 kubelet[2786]: E0515 16:28:56.463071 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.463191 kubelet[2786]: W0515 16:28:56.463173 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.463341 kubelet[2786]: E0515 16:28:56.463266 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 16:28:56.463632 kubelet[2786]: E0515 16:28:56.463619 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.463746 kubelet[2786]: W0515 16:28:56.463709 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.463746 kubelet[2786]: E0515 16:28:56.463724 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.480372 systemd[1]: Started cri-containerd-94338eb06c9902333709c6e13310ff02437b234bf82e61825059e2275a35efe1.scope - libcontainer container 94338eb06c9902333709c6e13310ff02437b234bf82e61825059e2275a35efe1. May 15 16:28:56.491120 kubelet[2786]: E0515 16:28:56.491094 2786 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 16:28:56.491634 kubelet[2786]: W0515 16:28:56.491616 2786 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 16:28:56.491800 kubelet[2786]: E0515 16:28:56.491722 2786 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 16:28:56.531837 containerd[1540]: time="2025-05-15T16:28:56.531756172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wlbtn,Uid:8ae61cac-23ac-4e0b-844e-4422aae3b5cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"94338eb06c9902333709c6e13310ff02437b234bf82e61825059e2275a35efe1\"" May 15 16:28:56.537146 containerd[1540]: time="2025-05-15T16:28:56.537115033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 16:28:56.556828 containerd[1540]: time="2025-05-15T16:28:56.556765863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86c98d4c89-vz8r2,Uid:9ae5563f-b345-4cab-8ec8-742caeb4987f,Namespace:calico-system,Attempt:0,} returns sandbox id \"489724a1091d53f7f52483284b5986d0052265de80cc0fa21b0b32e1c2f4239c\"" May 15 16:28:57.710982 kubelet[2786]: E0515 16:28:57.710207 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nr2s" podUID="959f2504-6d0c-476e-a05f-9ba36f5d930f" May 15 16:28:58.662916 containerd[1540]: time="2025-05-15T16:28:58.662845477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:58.664733 containerd[1540]: time="2025-05-15T16:28:58.664697336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 15 16:28:58.666911 containerd[1540]: time="2025-05-15T16:28:58.665798645Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:58.668829 containerd[1540]: 
time="2025-05-15T16:28:58.668786309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:28:58.670123 containerd[1540]: time="2025-05-15T16:28:58.670088625Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.132337726s" May 15 16:28:58.670193 containerd[1540]: time="2025-05-15T16:28:58.670122669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 15 16:28:58.671183 containerd[1540]: time="2025-05-15T16:28:58.671148557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 16:28:58.673557 containerd[1540]: time="2025-05-15T16:28:58.673514913Z" level=info msg="CreateContainer within sandbox \"94338eb06c9902333709c6e13310ff02437b234bf82e61825059e2275a35efe1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 16:28:58.689204 containerd[1540]: time="2025-05-15T16:28:58.689147784Z" level=info msg="Container b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c: CDI devices from CRI Config.CDIDevices: []" May 15 16:28:58.701449 containerd[1540]: time="2025-05-15T16:28:58.701400926Z" level=info msg="CreateContainer within sandbox \"94338eb06c9902333709c6e13310ff02437b234bf82e61825059e2275a35efe1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c\"" May 15 16:28:58.702034 containerd[1540]: time="2025-05-15T16:28:58.702004981Z" level=info msg="StartContainer for \"b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c\"" May 15 16:28:58.704173 containerd[1540]: time="2025-05-15T16:28:58.704129272Z" level=info msg="connecting to shim b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c" address="unix:///run/containerd/s/3728dfe1c80fa4812f24822b484f193a44f415645b619ac7585a6fd30e8ed90a" protocol=ttrpc version=3 May 15 16:28:58.733050 systemd[1]: Started cri-containerd-b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c.scope - libcontainer container b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c. May 15 16:28:58.777923 containerd[1540]: time="2025-05-15T16:28:58.777819223Z" level=info msg="StartContainer for \"b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c\" returns successfully" May 15 16:28:58.792073 systemd[1]: cri-containerd-b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c.scope: Deactivated successfully. 
May 15 16:28:58.796542 containerd[1540]: time="2025-05-15T16:28:58.796509439Z" level=info msg="received exit event container_id:\"b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c\" id:\"b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c\" pid:3381 exited_at:{seconds:1747326538 nanos:796208123}" May 15 16:28:58.796959 containerd[1540]: time="2025-05-15T16:28:58.796803782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c\" id:\"b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c\" pid:3381 exited_at:{seconds:1747326538 nanos:796208123}" May 15 16:28:58.819198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c-rootfs.mount: Deactivated successfully. May 15 16:28:59.709806 kubelet[2786]: E0515 16:28:59.708222 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nr2s" podUID="959f2504-6d0c-476e-a05f-9ba36f5d930f" May 15 16:29:01.707937 kubelet[2786]: E0515 16:29:01.707594 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nr2s" podUID="959f2504-6d0c-476e-a05f-9ba36f5d930f" May 15 16:29:01.921632 containerd[1540]: time="2025-05-15T16:29:01.921496253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:01.924063 containerd[1540]: time="2025-05-15T16:29:01.924041724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 15 16:29:01.925395 containerd[1540]: time="2025-05-15T16:29:01.925357686Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:01.928535 containerd[1540]: time="2025-05-15T16:29:01.928456756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:01.929751 containerd[1540]: time="2025-05-15T16:29:01.929675265Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.258216005s" May 15 16:29:01.929751 containerd[1540]: time="2025-05-15T16:29:01.929722945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 15 16:29:01.932577 containerd[1540]: time="2025-05-15T16:29:01.932459294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 16:29:01.949972 containerd[1540]: time="2025-05-15T16:29:01.949194327Z" level=info msg="CreateContainer within sandbox 
\"489724a1091d53f7f52483284b5986d0052265de80cc0fa21b0b32e1c2f4239c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 16:29:01.968343 containerd[1540]: time="2025-05-15T16:29:01.968176900Z" level=info msg="Container 7eb500d459a0501f30eabf89444ddb8819c6d4d770fb0a7f73a7961ae065fd36: CDI devices from CRI Config.CDIDevices: []" May 15 16:29:01.973954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917816885.mount: Deactivated successfully. May 15 16:29:01.984574 containerd[1540]: time="2025-05-15T16:29:01.984486885Z" level=info msg="CreateContainer within sandbox \"489724a1091d53f7f52483284b5986d0052265de80cc0fa21b0b32e1c2f4239c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7eb500d459a0501f30eabf89444ddb8819c6d4d770fb0a7f73a7961ae065fd36\"" May 15 16:29:01.985557 containerd[1540]: time="2025-05-15T16:29:01.985419827Z" level=info msg="StartContainer for \"7eb500d459a0501f30eabf89444ddb8819c6d4d770fb0a7f73a7961ae065fd36\"" May 15 16:29:01.987665 containerd[1540]: time="2025-05-15T16:29:01.987610281Z" level=info msg="connecting to shim 7eb500d459a0501f30eabf89444ddb8819c6d4d770fb0a7f73a7961ae065fd36" address="unix:///run/containerd/s/c5b9596e04231bfce23c107c8340caf8d5001e3ef4bbd68569ce9b564b2ab4bc" protocol=ttrpc version=3 May 15 16:29:02.035056 systemd[1]: Started cri-containerd-7eb500d459a0501f30eabf89444ddb8819c6d4d770fb0a7f73a7961ae065fd36.scope - libcontainer container 7eb500d459a0501f30eabf89444ddb8819c6d4d770fb0a7f73a7961ae065fd36. May 15 16:29:02.091475 containerd[1540]: time="2025-05-15T16:29:02.091403766Z" level=info msg="StartContainer for \"7eb500d459a0501f30eabf89444ddb8819c6d4d770fb0a7f73a7961ae065fd36\" returns successfully" May 15 16:29:02.937243 kubelet[2786]: I0515 16:29:02.936425 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86c98d4c89-vz8r2" podStartSLOduration=2.563262584 podStartE2EDuration="7.936336794s" podCreationTimestamp="2025-05-15 16:28:55 +0000 UTC" firstStartedPulling="2025-05-15 16:28:56.559176533 +0000 UTC m=+13.966730220" lastFinishedPulling="2025-05-15 16:29:01.932250753 +0000 UTC m=+19.339804430" observedRunningTime="2025-05-15 16:29:02.931988578 +0000 UTC m=+20.339542315" watchObservedRunningTime="2025-05-15 16:29:02.936336794 +0000 UTC m=+20.343890521" May 15 16:29:03.707684 kubelet[2786]: E0515 16:29:03.707473 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nr2s" podUID="959f2504-6d0c-476e-a05f-9ba36f5d930f" May 15 16:29:05.707803 kubelet[2786]: E0515 16:29:05.707747 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nr2s" podUID="959f2504-6d0c-476e-a05f-9ba36f5d930f" May 15 16:29:07.707847 kubelet[2786]: E0515 16:29:07.707790 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nr2s" podUID="959f2504-6d0c-476e-a05f-9ba36f5d930f" May 15 16:29:08.218019 containerd[1540]: 
time="2025-05-15T16:29:08.217800043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:08.219813 containerd[1540]: time="2025-05-15T16:29:08.219771174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 15 16:29:08.221044 containerd[1540]: time="2025-05-15T16:29:08.221008306Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:08.224779 containerd[1540]: time="2025-05-15T16:29:08.224709174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:08.225344 containerd[1540]: time="2025-05-15T16:29:08.225176601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.292654639s" May 15 16:29:08.225344 containerd[1540]: time="2025-05-15T16:29:08.225215023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 15 16:29:08.227799 containerd[1540]: time="2025-05-15T16:29:08.227766593Z" level=info msg="CreateContainer within sandbox \"94338eb06c9902333709c6e13310ff02437b234bf82e61825059e2275a35efe1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 16:29:08.244947 containerd[1540]: time="2025-05-15T16:29:08.244906289Z" level=info msg="Container 6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e: CDI devices from CRI Config.CDIDevices: []" May 15 16:29:08.262070 containerd[1540]: time="2025-05-15T16:29:08.262029393Z" level=info msg="CreateContainer within sandbox \"94338eb06c9902333709c6e13310ff02437b234bf82e61825059e2275a35efe1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e\"" May 15 16:29:08.264199 containerd[1540]: time="2025-05-15T16:29:08.264146618Z" level=info msg="StartContainer for \"6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e\"" May 15 16:29:08.266167 containerd[1540]: time="2025-05-15T16:29:08.266110594Z" level=info msg="connecting to shim 6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e" address="unix:///run/containerd/s/3728dfe1c80fa4812f24822b484f193a44f415645b619ac7585a6fd30e8ed90a" protocol=ttrpc version=3 May 15 16:29:08.296085 systemd[1]: Started cri-containerd-6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e.scope - libcontainer container 6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e. 
May 15 16:29:08.347901 containerd[1540]: time="2025-05-15T16:29:08.347128856Z" level=info msg="StartContainer for \"6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e\" returns successfully" May 15 16:29:09.707785 kubelet[2786]: E0515 16:29:09.707550 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7nr2s" podUID="959f2504-6d0c-476e-a05f-9ba36f5d930f" May 15 16:29:10.282618 systemd[1]: cri-containerd-6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e.scope: Deactivated successfully. May 15 16:29:10.284847 systemd[1]: cri-containerd-6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e.scope: Consumed 717ms CPU time, 171.9M memory peak, 154M written to disk. May 15 16:29:10.295168 containerd[1540]: time="2025-05-15T16:29:10.294910506Z" level=info msg="received exit event container_id:\"6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e\" id:\"6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e\" pid:3480 exited_at:{seconds:1747326550 nanos:294386642}" May 15 16:29:10.296447 containerd[1540]: time="2025-05-15T16:29:10.295854889Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e\" id:\"6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e\" pid:3480 exited_at:{seconds:1747326550 nanos:294386642}" May 15 16:29:10.317107 kubelet[2786]: I0515 16:29:10.316618 2786 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 16:29:10.360738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e-rootfs.mount: Deactivated successfully. 
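The TaskExit events in this capture report exit times as {seconds, nanos} pairs rather than formatted timestamps. A small helper to convert one back to wall-clock time, using the install-cni exit event above as sample input:

    // Converts a containerd exited_at:{seconds nanos} pair into a readable
    // timestamp; the values are taken from the install-cni exit event above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        exitedAt := time.Unix(1747326550, 294386642).UTC()
        fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-05-15T16:29:10.294386642Z
    }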
May 15 16:29:10.450352 kubelet[2786]: W0515 16:29:10.450075 2786 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4334-0-0-a-855fb07f2a.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4334-0-0-a-855fb07f2a.novalocal' and this object May 15 16:29:10.450352 kubelet[2786]: E0515 16:29:10.450219 2786 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4334-0-0-a-855fb07f2a.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4334-0-0-a-855fb07f2a.novalocal' and this object" logger="UnhandledError" May 15 16:29:10.522769 kubelet[2786]: I0515 16:29:10.471493 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f15c2804-51fb-4484-8678-65ec76ec5acb-config-volume\") pod \"coredns-6f6b679f8f-s5czs\" (UID: \"f15c2804-51fb-4484-8678-65ec76ec5acb\") " pod="kube-system/coredns-6f6b679f8f-s5czs" May 15 16:29:10.522769 kubelet[2786]: I0515 16:29:10.471611 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqks8\" (UniqueName: \"kubernetes.io/projected/f15c2804-51fb-4484-8678-65ec76ec5acb-kube-api-access-mqks8\") pod \"coredns-6f6b679f8f-s5czs\" (UID: \"f15c2804-51fb-4484-8678-65ec76ec5acb\") " pod="kube-system/coredns-6f6b679f8f-s5czs" May 15 16:29:10.467568 systemd[1]: Created slice kubepods-burstable-podf15c2804_51fb_4484_8678_65ec76ec5acb.slice - libcontainer container kubepods-burstable-podf15c2804_51fb_4484_8678_65ec76ec5acb.slice. May 15 16:29:10.569361 systemd[1]: Created slice kubepods-besteffort-pod12ca99da_51df_40ee_9827_8e6c699985d2.slice - libcontainer container kubepods-besteffort-pod12ca99da_51df_40ee_9827_8e6c699985d2.slice. May 15 16:29:10.575610 kubelet[2786]: I0515 16:29:10.573191 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmkxq\" (UniqueName: \"kubernetes.io/projected/aeb574cf-384e-4f72-96fe-7c6174fca348-kube-api-access-wmkxq\") pod \"coredns-6f6b679f8f-v9hd8\" (UID: \"aeb574cf-384e-4f72-96fe-7c6174fca348\") " pod="kube-system/coredns-6f6b679f8f-v9hd8" May 15 16:29:10.575610 kubelet[2786]: I0515 16:29:10.573284 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aeb574cf-384e-4f72-96fe-7c6174fca348-config-volume\") pod \"coredns-6f6b679f8f-v9hd8\" (UID: \"aeb574cf-384e-4f72-96fe-7c6174fca348\") " pod="kube-system/coredns-6f6b679f8f-v9hd8" May 15 16:29:10.590305 systemd[1]: Created slice kubepods-burstable-podaeb574cf_384e_4f72_96fe_7c6174fca348.slice - libcontainer container kubepods-burstable-podaeb574cf_384e_4f72_96fe_7c6174fca348.slice. May 15 16:29:10.597592 systemd[1]: Created slice kubepods-besteffort-pod78f71923_9d79_4c68_9efd_57501e9f85c4.slice - libcontainer container kubepods-besteffort-pod78f71923_9d79_4c68_9efd_57501e9f85c4.slice. May 15 16:29:10.603336 systemd[1]: Created slice kubepods-besteffort-pod1306ce75_0224_482a_9247_f27c93c10517.slice - libcontainer container kubepods-besteffort-pod1306ce75_0224_482a_9247_f27c93c10517.slice. 
May 15 16:29:10.674299 kubelet[2786]: I0515 16:29:10.673511 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkkvx\" (UniqueName: \"kubernetes.io/projected/12ca99da-51df-40ee-9827-8e6c699985d2-kube-api-access-mkkvx\") pod \"calico-apiserver-85d5cbdb48-m6cvl\" (UID: \"12ca99da-51df-40ee-9827-8e6c699985d2\") " pod="calico-apiserver/calico-apiserver-85d5cbdb48-m6cvl" May 15 16:29:10.674299 kubelet[2786]: I0515 16:29:10.673598 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12ca99da-51df-40ee-9827-8e6c699985d2-calico-apiserver-certs\") pod \"calico-apiserver-85d5cbdb48-m6cvl\" (UID: \"12ca99da-51df-40ee-9827-8e6c699985d2\") " pod="calico-apiserver/calico-apiserver-85d5cbdb48-m6cvl" May 15 16:29:10.774548 kubelet[2786]: I0515 16:29:10.774475 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78f71923-9d79-4c68-9efd-57501e9f85c4-calico-apiserver-certs\") pod \"calico-apiserver-85d5cbdb48-6t246\" (UID: \"78f71923-9d79-4c68-9efd-57501e9f85c4\") " pod="calico-apiserver/calico-apiserver-85d5cbdb48-6t246" May 15 16:29:10.776630 kubelet[2786]: I0515 16:29:10.775541 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vczdl\" (UniqueName: \"kubernetes.io/projected/78f71923-9d79-4c68-9efd-57501e9f85c4-kube-api-access-vczdl\") pod \"calico-apiserver-85d5cbdb48-6t246\" (UID: \"78f71923-9d79-4c68-9efd-57501e9f85c4\") " pod="calico-apiserver/calico-apiserver-85d5cbdb48-6t246" May 15 16:29:10.776630 kubelet[2786]: I0515 16:29:10.775615 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1306ce75-0224-482a-9247-f27c93c10517-tigera-ca-bundle\") pod \"calico-kube-controllers-7b79df777b-82jtl\" (UID: \"1306ce75-0224-482a-9247-f27c93c10517\") " pod="calico-system/calico-kube-controllers-7b79df777b-82jtl" May 15 16:29:10.776630 kubelet[2786]: I0515 16:29:10.775668 2786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nf7q\" (UniqueName: \"kubernetes.io/projected/1306ce75-0224-482a-9247-f27c93c10517-kube-api-access-6nf7q\") pod \"calico-kube-controllers-7b79df777b-82jtl\" (UID: \"1306ce75-0224-482a-9247-f27c93c10517\") " pod="calico-system/calico-kube-controllers-7b79df777b-82jtl" May 15 16:29:11.183595 containerd[1540]: time="2025-05-15T16:29:11.183457134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d5cbdb48-m6cvl,Uid:12ca99da-51df-40ee-9827-8e6c699985d2,Namespace:calico-apiserver,Attempt:0,}" May 15 16:29:11.203120 containerd[1540]: time="2025-05-15T16:29:11.202709409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d5cbdb48-6t246,Uid:78f71923-9d79-4c68-9efd-57501e9f85c4,Namespace:calico-apiserver,Attempt:0,}" May 15 16:29:11.209798 containerd[1540]: time="2025-05-15T16:29:11.209351576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b79df777b-82jtl,Uid:1306ce75-0224-482a-9247-f27c93c10517,Namespace:calico-system,Attempt:0,}" May 15 16:29:11.343103 containerd[1540]: time="2025-05-15T16:29:11.343034401Z" level=error msg="Failed to destroy network for sandbox 
\"fad5b0d19aeb88f0029fa38045887959b5f512057f1b772a4161b267ee588198\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.347018 containerd[1540]: time="2025-05-15T16:29:11.346968074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b79df777b-82jtl,Uid:1306ce75-0224-482a-9247-f27c93c10517,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad5b0d19aeb88f0029fa38045887959b5f512057f1b772a4161b267ee588198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.348503 kubelet[2786]: E0515 16:29:11.347254 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad5b0d19aeb88f0029fa38045887959b5f512057f1b772a4161b267ee588198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.348503 kubelet[2786]: E0515 16:29:11.347387 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad5b0d19aeb88f0029fa38045887959b5f512057f1b772a4161b267ee588198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b79df777b-82jtl" May 15 16:29:11.348503 kubelet[2786]: E0515 16:29:11.347421 2786 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad5b0d19aeb88f0029fa38045887959b5f512057f1b772a4161b267ee588198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b79df777b-82jtl" May 15 16:29:11.348621 kubelet[2786]: E0515 16:29:11.347476 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b79df777b-82jtl_calico-system(1306ce75-0224-482a-9247-f27c93c10517)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b79df777b-82jtl_calico-system(1306ce75-0224-482a-9247-f27c93c10517)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fad5b0d19aeb88f0029fa38045887959b5f512057f1b772a4161b267ee588198\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b79df777b-82jtl" podUID="1306ce75-0224-482a-9247-f27c93c10517" May 15 16:29:11.351043 containerd[1540]: time="2025-05-15T16:29:11.351006995Z" level=error msg="Failed to destroy network for sandbox \"1af8432cb3dced671484aed6efe3d018ad0fae814b7a475c4d76b0133321d660\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
May 15 16:29:11.353309 containerd[1540]: time="2025-05-15T16:29:11.353271736Z" level=error msg="Failed to destroy network for sandbox \"4fafa0846403acbd81980df1f13f2d79e8427bc29b3a7475cab15f75a8e8a8bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.353624 containerd[1540]: time="2025-05-15T16:29:11.353454709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d5cbdb48-m6cvl,Uid:12ca99da-51df-40ee-9827-8e6c699985d2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1af8432cb3dced671484aed6efe3d018ad0fae814b7a475c4d76b0133321d660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.354182 kubelet[2786]: E0515 16:29:11.353967 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1af8432cb3dced671484aed6efe3d018ad0fae814b7a475c4d76b0133321d660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.354182 kubelet[2786]: E0515 16:29:11.354020 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1af8432cb3dced671484aed6efe3d018ad0fae814b7a475c4d76b0133321d660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d5cbdb48-m6cvl" May 15 16:29:11.354182 kubelet[2786]: E0515 16:29:11.354043 2786 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1af8432cb3dced671484aed6efe3d018ad0fae814b7a475c4d76b0133321d660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d5cbdb48-m6cvl" May 15 16:29:11.354327 kubelet[2786]: E0515 16:29:11.354108 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85d5cbdb48-m6cvl_calico-apiserver(12ca99da-51df-40ee-9827-8e6c699985d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85d5cbdb48-m6cvl_calico-apiserver(12ca99da-51df-40ee-9827-8e6c699985d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1af8432cb3dced671484aed6efe3d018ad0fae814b7a475c4d76b0133321d660\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d5cbdb48-m6cvl" podUID="12ca99da-51df-40ee-9827-8e6c699985d2" May 15 16:29:11.356116 containerd[1540]: time="2025-05-15T16:29:11.356057615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d5cbdb48-6t246,Uid:78f71923-9d79-4c68-9efd-57501e9f85c4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"4fafa0846403acbd81980df1f13f2d79e8427bc29b3a7475cab15f75a8e8a8bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.356556 kubelet[2786]: E0515 16:29:11.356379 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fafa0846403acbd81980df1f13f2d79e8427bc29b3a7475cab15f75a8e8a8bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.356767 kubelet[2786]: E0515 16:29:11.356552 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fafa0846403acbd81980df1f13f2d79e8427bc29b3a7475cab15f75a8e8a8bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d5cbdb48-6t246" May 15 16:29:11.356767 kubelet[2786]: E0515 16:29:11.356578 2786 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4fafa0846403acbd81980df1f13f2d79e8427bc29b3a7475cab15f75a8e8a8bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d5cbdb48-6t246" May 15 16:29:11.356767 kubelet[2786]: E0515 16:29:11.356633 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85d5cbdb48-6t246_calico-apiserver(78f71923-9d79-4c68-9efd-57501e9f85c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85d5cbdb48-6t246_calico-apiserver(78f71923-9d79-4c68-9efd-57501e9f85c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4fafa0846403acbd81980df1f13f2d79e8427bc29b3a7475cab15f75a8e8a8bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d5cbdb48-6t246" podUID="78f71923-9d79-4c68-9efd-57501e9f85c4" May 15 16:29:11.424141 containerd[1540]: time="2025-05-15T16:29:11.423884222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s5czs,Uid:f15c2804-51fb-4484-8678-65ec76ec5acb,Namespace:kube-system,Attempt:0,}" May 15 16:29:11.496059 containerd[1540]: time="2025-05-15T16:29:11.495923054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9hd8,Uid:aeb574cf-384e-4f72-96fe-7c6174fca348,Namespace:kube-system,Attempt:0,}" May 15 16:29:11.525970 containerd[1540]: time="2025-05-15T16:29:11.525790173Z" level=error msg="Failed to destroy network for sandbox \"fb0f4bf8b7e456b43532673da07931a49190dfac33753b754795e892cb413be1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.528970 systemd[1]: 
run-netns-cni\x2d672a0950\x2d0df2\x2d5237\x2dc292\x2de0e4f4a64098.mount: Deactivated successfully. May 15 16:29:11.530552 containerd[1540]: time="2025-05-15T16:29:11.530504151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s5czs,Uid:f15c2804-51fb-4484-8678-65ec76ec5acb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0f4bf8b7e456b43532673da07931a49190dfac33753b754795e892cb413be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.532598 kubelet[2786]: E0515 16:29:11.530966 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0f4bf8b7e456b43532673da07931a49190dfac33753b754795e892cb413be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.532598 kubelet[2786]: E0515 16:29:11.532165 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0f4bf8b7e456b43532673da07931a49190dfac33753b754795e892cb413be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-s5czs" May 15 16:29:11.532598 kubelet[2786]: E0515 16:29:11.532212 2786 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0f4bf8b7e456b43532673da07931a49190dfac33753b754795e892cb413be1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-s5czs" May 15 16:29:11.533931 kubelet[2786]: E0515 16:29:11.532315 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-s5czs_kube-system(f15c2804-51fb-4484-8678-65ec76ec5acb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-s5czs_kube-system(f15c2804-51fb-4484-8678-65ec76ec5acb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb0f4bf8b7e456b43532673da07931a49190dfac33753b754795e892cb413be1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-s5czs" podUID="f15c2804-51fb-4484-8678-65ec76ec5acb" May 15 16:29:11.568186 containerd[1540]: time="2025-05-15T16:29:11.568138077Z" level=error msg="Failed to destroy network for sandbox \"4388b5e11a1183d50396de569fe91b82f8b853e8612caa1f610dd032f4fc096e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.571093 systemd[1]: run-netns-cni\x2dc9c30a42\x2dbc18\x2d6590\x2d4485\x2d959ba154f8b8.mount: Deactivated successfully. 
May 15 16:29:11.572262 containerd[1540]: time="2025-05-15T16:29:11.572220630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9hd8,Uid:aeb574cf-384e-4f72-96fe-7c6174fca348,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4388b5e11a1183d50396de569fe91b82f8b853e8612caa1f610dd032f4fc096e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.572645 kubelet[2786]: E0515 16:29:11.572448 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4388b5e11a1183d50396de569fe91b82f8b853e8612caa1f610dd032f4fc096e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.572645 kubelet[2786]: E0515 16:29:11.572513 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4388b5e11a1183d50396de569fe91b82f8b853e8612caa1f610dd032f4fc096e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9hd8" May 15 16:29:11.572645 kubelet[2786]: E0515 16:29:11.572536 2786 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4388b5e11a1183d50396de569fe91b82f8b853e8612caa1f610dd032f4fc096e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9hd8" May 15 16:29:11.573741 kubelet[2786]: E0515 16:29:11.572856 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-v9hd8_kube-system(aeb574cf-384e-4f72-96fe-7c6174fca348)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-v9hd8_kube-system(aeb574cf-384e-4f72-96fe-7c6174fca348)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4388b5e11a1183d50396de569fe91b82f8b853e8612caa1f610dd032f4fc096e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v9hd8" podUID="aeb574cf-384e-4f72-96fe-7c6174fca348" May 15 16:29:11.725039 systemd[1]: Created slice kubepods-besteffort-pod959f2504_6d0c_476e_a05f_9ba36f5d930f.slice - libcontainer container kubepods-besteffort-pod959f2504_6d0c_476e_a05f_9ba36f5d930f.slice. 
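
The sandbox failures above (and the csi-node-driver failure that follows) all report the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename before wiring a pod, and that file only exists once the calico/node container is running with /var/lib/calico/ mounted. A minimal sketch of that readiness check, using only the path quoted in the error text (this is illustrative, not Calico source):

```python
# Sketch only: mirrors the check described by the CNI error text above.
# /var/lib/calico/nodename is written by the calico/node container once it
# is running and has mounted /var/lib/calico/ from the host.
from pathlib import Path

NODENAME = Path("/var/lib/calico/nodename")  # path quoted from the log

def calico_node_ready() -> bool:
    """True once calico/node has published the node name the CNI plugin expects."""
    return NODENAME.is_file()

if __name__ == "__main__":
    if calico_node_ready():
        print("calico nodename:", NODENAME.read_text().strip())
    else:
        print("not ready: /var/lib/calico/nodename is missing")
```
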
May 15 16:29:11.732550 containerd[1540]: time="2025-05-15T16:29:11.732014915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7nr2s,Uid:959f2504-6d0c-476e-a05f-9ba36f5d930f,Namespace:calico-system,Attempt:0,}" May 15 16:29:11.863750 containerd[1540]: time="2025-05-15T16:29:11.863603869Z" level=error msg="Failed to destroy network for sandbox \"55ec70b5fa9f6adbcac804361ae54f66026eb91667f905e5cee49cea83a7c1ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.867568 containerd[1540]: time="2025-05-15T16:29:11.866806731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7nr2s,Uid:959f2504-6d0c-476e-a05f-9ba36f5d930f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ec70b5fa9f6adbcac804361ae54f66026eb91667f905e5cee49cea83a7c1ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.867816 kubelet[2786]: E0515 16:29:11.867294 2786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ec70b5fa9f6adbcac804361ae54f66026eb91667f905e5cee49cea83a7c1ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 16:29:11.870083 kubelet[2786]: E0515 16:29:11.868035 2786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ec70b5fa9f6adbcac804361ae54f66026eb91667f905e5cee49cea83a7c1ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7nr2s" May 15 16:29:11.870083 kubelet[2786]: E0515 16:29:11.868097 2786 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55ec70b5fa9f6adbcac804361ae54f66026eb91667f905e5cee49cea83a7c1ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7nr2s" May 15 16:29:11.870083 kubelet[2786]: E0515 16:29:11.868211 2786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7nr2s_calico-system(959f2504-6d0c-476e-a05f-9ba36f5d930f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7nr2s_calico-system(959f2504-6d0c-476e-a05f-9ba36f5d930f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55ec70b5fa9f6adbcac804361ae54f66026eb91667f905e5cee49cea83a7c1ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7nr2s" podUID="959f2504-6d0c-476e-a05f-9ba36f5d930f" May 15 16:29:11.933325 containerd[1540]: time="2025-05-15T16:29:11.933241454Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 16:29:21.023773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount990178046.mount: Deactivated successfully. May 15 16:29:21.454937 containerd[1540]: time="2025-05-15T16:29:21.454168337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:21.458186 containerd[1540]: time="2025-05-15T16:29:21.458038969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 15 16:29:21.460735 containerd[1540]: time="2025-05-15T16:29:21.460642514Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:21.465512 containerd[1540]: time="2025-05-15T16:29:21.465357571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:21.467860 containerd[1540]: time="2025-05-15T16:29:21.467730534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 9.534416433s" May 15 16:29:21.467860 containerd[1540]: time="2025-05-15T16:29:21.467853955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 15 16:29:21.512377 containerd[1540]: time="2025-05-15T16:29:21.512318932Z" level=info msg="CreateContainer within sandbox \"94338eb06c9902333709c6e13310ff02437b234bf82e61825059e2275a35efe1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 16:29:21.529938 containerd[1540]: time="2025-05-15T16:29:21.529584086Z" level=info msg="Container 9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c: CDI devices from CRI Config.CDIDevices: []" May 15 16:29:21.549554 containerd[1540]: time="2025-05-15T16:29:21.549504281Z" level=info msg="CreateContainer within sandbox \"94338eb06c9902333709c6e13310ff02437b234bf82e61825059e2275a35efe1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\"" May 15 16:29:21.550553 containerd[1540]: time="2025-05-15T16:29:21.550504207Z" level=info msg="StartContainer for \"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\"" May 15 16:29:21.553190 containerd[1540]: time="2025-05-15T16:29:21.553141636Z" level=info msg="connecting to shim 9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c" address="unix:///run/containerd/s/3728dfe1c80fa4812f24822b484f193a44f415645b619ac7585a6fd30e8ed90a" protocol=ttrpc version=3 May 15 16:29:21.587051 systemd[1]: Started cri-containerd-9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c.scope - libcontainer container 9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c. 
May 15 16:29:21.646494 containerd[1540]: time="2025-05-15T16:29:21.646447612Z" level=info msg="StartContainer for \"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" returns successfully" May 15 16:29:21.720892 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 16:29:21.721225 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 15 16:29:22.108284 containerd[1540]: time="2025-05-15T16:29:22.108226693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" id:\"381dd4ae84fc7135eed7276b3bae6cc3a8c220a57b7c35f3c0df39469b29b5af\" pid:3762 exit_status:1 exited_at:{seconds:1747326562 nanos:107360888}" May 15 16:29:22.710140 containerd[1540]: time="2025-05-15T16:29:22.709513261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7nr2s,Uid:959f2504-6d0c-476e-a05f-9ba36f5d930f,Namespace:calico-system,Attempt:0,}" May 15 16:29:22.927134 systemd-networkd[1454]: cali944d681b198: Link UP May 15 16:29:22.927810 systemd-networkd[1454]: cali944d681b198: Gained carrier May 15 16:29:22.946748 kubelet[2786]: I0515 16:29:22.946657 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wlbtn" podStartSLOduration=2.013169413 podStartE2EDuration="26.946542057s" podCreationTimestamp="2025-05-15 16:28:56 +0000 UTC" firstStartedPulling="2025-05-15 16:28:56.536908244 +0000 UTC m=+13.944461921" lastFinishedPulling="2025-05-15 16:29:21.470280838 +0000 UTC m=+38.877834565" observedRunningTime="2025-05-15 16:29:21.997076731 +0000 UTC m=+39.404630409" watchObservedRunningTime="2025-05-15 16:29:22.946542057 +0000 UTC m=+40.354095734" May 15 16:29:22.948628 containerd[1540]: 2025-05-15 16:29:22.785 [INFO][3786] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 16:29:22.948628 containerd[1540]: 2025-05-15 16:29:22.826 [INFO][3786] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-eth0 csi-node-driver- calico-system 959f2504-6d0c-476e-a05f-9ba36f5d930f 619 0 2025-05-15 16:28:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4334-0-0-a-855fb07f2a.novalocal csi-node-driver-7nr2s eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali944d681b198 [] []}} ContainerID="6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" Namespace="calico-system" Pod="csi-node-driver-7nr2s" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-" May 15 16:29:22.948628 containerd[1540]: 2025-05-15 16:29:22.827 [INFO][3786] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" Namespace="calico-system" Pod="csi-node-driver-7nr2s" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-eth0" May 15 16:29:22.948628 containerd[1540]: 2025-05-15 16:29:22.872 [INFO][3797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" 
HandleID="k8s-pod-network.6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-eth0" May 15 16:29:22.948790 containerd[1540]: 2025-05-15 16:29:22.882 [INFO][3797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" HandleID="k8s-pod-network.6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003050e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334-0-0-a-855fb07f2a.novalocal", "pod":"csi-node-driver-7nr2s", "timestamp":"2025-05-15 16:29:22.872941527 +0000 UTC"}, Hostname:"ci-4334-0-0-a-855fb07f2a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 16:29:22.948790 containerd[1540]: 2025-05-15 16:29:22.882 [INFO][3797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 16:29:22.948790 containerd[1540]: 2025-05-15 16:29:22.882 [INFO][3797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 16:29:22.948790 containerd[1540]: 2025-05-15 16:29:22.882 [INFO][3797] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334-0-0-a-855fb07f2a.novalocal' May 15 16:29:22.948790 containerd[1540]: 2025-05-15 16:29:22.885 [INFO][3797] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:22.948790 containerd[1540]: 2025-05-15 16:29:22.889 [INFO][3797] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:22.948790 containerd[1540]: 2025-05-15 16:29:22.895 [INFO][3797] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:22.948790 containerd[1540]: 2025-05-15 16:29:22.898 [INFO][3797] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:22.948790 containerd[1540]: 2025-05-15 16:29:22.900 [INFO][3797] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:22.949308 containerd[1540]: 2025-05-15 16:29:22.900 [INFO][3797] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:22.949308 containerd[1540]: 2025-05-15 16:29:22.902 [INFO][3797] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538 May 15 16:29:22.949308 containerd[1540]: 2025-05-15 16:29:22.906 [INFO][3797] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:22.949308 containerd[1540]: 2025-05-15 16:29:22.914 [INFO][3797] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.129/26] block=192.168.53.128/26 
handle="k8s-pod-network.6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:22.949308 containerd[1540]: 2025-05-15 16:29:22.914 [INFO][3797] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.129/26] handle="k8s-pod-network.6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:22.949308 containerd[1540]: 2025-05-15 16:29:22.914 [INFO][3797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 16:29:22.949308 containerd[1540]: 2025-05-15 16:29:22.914 [INFO][3797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.129/26] IPv6=[] ContainerID="6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" HandleID="k8s-pod-network.6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-eth0" May 15 16:29:22.950381 containerd[1540]: 2025-05-15 16:29:22.917 [INFO][3786] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" Namespace="calico-system" Pod="csi-node-driver-7nr2s" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"959f2504-6d0c-476e-a05f-9ba36f5d930f", ResourceVersion:"619", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 16, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-a-855fb07f2a.novalocal", ContainerID:"", Pod:"csi-node-driver-7nr2s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali944d681b198", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 16:29:22.950615 containerd[1540]: 2025-05-15 16:29:22.917 [INFO][3786] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.129/32] ContainerID="6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" Namespace="calico-system" Pod="csi-node-driver-7nr2s" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-eth0" May 15 16:29:22.950615 containerd[1540]: 2025-05-15 16:29:22.917 [INFO][3786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali944d681b198 ContainerID="6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" Namespace="calico-system" Pod="csi-node-driver-7nr2s" 
WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-eth0" May 15 16:29:22.950615 containerd[1540]: 2025-05-15 16:29:22.927 [INFO][3786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" Namespace="calico-system" Pod="csi-node-driver-7nr2s" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-eth0" May 15 16:29:22.950724 containerd[1540]: 2025-05-15 16:29:22.928 [INFO][3786] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" Namespace="calico-system" Pod="csi-node-driver-7nr2s" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"959f2504-6d0c-476e-a05f-9ba36f5d930f", ResourceVersion:"619", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 16, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-a-855fb07f2a.novalocal", ContainerID:"6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538", Pod:"csi-node-driver-7nr2s", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali944d681b198", MAC:"42:80:6f:13:e5:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 16:29:22.950790 containerd[1540]: 2025-05-15 16:29:22.945 [INFO][3786] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" Namespace="calico-system" Pod="csi-node-driver-7nr2s" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-csi--node--driver--7nr2s-eth0" May 15 16:29:22.989558 containerd[1540]: time="2025-05-15T16:29:22.989017269Z" level=info msg="connecting to shim 6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538" address="unix:///run/containerd/s/b08d676504fd6a1b861cb28c48981f4c3181c565e92a394fdfa614f959872090" namespace=k8s.io protocol=ttrpc version=3 May 15 16:29:23.032494 systemd[1]: Started cri-containerd-6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538.scope - libcontainer container 6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538. 
May 15 16:29:23.065485 containerd[1540]: time="2025-05-15T16:29:23.065439170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7nr2s,Uid:959f2504-6d0c-476e-a05f-9ba36f5d930f,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538\"" May 15 16:29:23.068221 containerd[1540]: time="2025-05-15T16:29:23.068165986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 16:29:23.092400 containerd[1540]: time="2025-05-15T16:29:23.092289786Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" id:\"d1da68da85881a42a6e9a01259376b57086d766d4a375d458f807d414cb79c78\" pid:3858 exit_status:1 exited_at:{seconds:1747326563 nanos:91961069}" May 15 16:29:23.770449 systemd-networkd[1454]: vxlan.calico: Link UP May 15 16:29:23.770917 systemd-networkd[1454]: vxlan.calico: Gained carrier May 15 16:29:24.600428 systemd-networkd[1454]: cali944d681b198: Gained IPv6LL May 15 16:29:24.709816 containerd[1540]: time="2025-05-15T16:29:24.709076013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9hd8,Uid:aeb574cf-384e-4f72-96fe-7c6174fca348,Namespace:kube-system,Attempt:0,}" May 15 16:29:24.926300 systemd-networkd[1454]: cali36d0f681e47: Link UP May 15 16:29:24.927640 systemd-networkd[1454]: cali36d0f681e47: Gained carrier May 15 16:29:24.950451 containerd[1540]: 2025-05-15 16:29:24.808 [INFO][4074] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-eth0 coredns-6f6b679f8f- kube-system aeb574cf-384e-4f72-96fe-7c6174fca348 720 0 2025-05-15 16:28:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334-0-0-a-855fb07f2a.novalocal coredns-6f6b679f8f-v9hd8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali36d0f681e47 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hd8" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-" May 15 16:29:24.950451 containerd[1540]: 2025-05-15 16:29:24.810 [INFO][4074] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hd8" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-eth0" May 15 16:29:24.950451 containerd[1540]: 2025-05-15 16:29:24.865 [INFO][4087] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" HandleID="k8s-pod-network.df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-eth0" May 15 16:29:24.951294 containerd[1540]: 2025-05-15 16:29:24.877 [INFO][4087] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" HandleID="k8s-pod-network.df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" 
Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bd050), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334-0-0-a-855fb07f2a.novalocal", "pod":"coredns-6f6b679f8f-v9hd8", "timestamp":"2025-05-15 16:29:24.865073694 +0000 UTC"}, Hostname:"ci-4334-0-0-a-855fb07f2a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 16:29:24.951294 containerd[1540]: 2025-05-15 16:29:24.877 [INFO][4087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 16:29:24.951294 containerd[1540]: 2025-05-15 16:29:24.877 [INFO][4087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 16:29:24.951294 containerd[1540]: 2025-05-15 16:29:24.877 [INFO][4087] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334-0-0-a-855fb07f2a.novalocal' May 15 16:29:24.951294 containerd[1540]: 2025-05-15 16:29:24.880 [INFO][4087] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:24.951294 containerd[1540]: 2025-05-15 16:29:24.886 [INFO][4087] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:24.951294 containerd[1540]: 2025-05-15 16:29:24.894 [INFO][4087] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:24.951294 containerd[1540]: 2025-05-15 16:29:24.899 [INFO][4087] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:24.951294 containerd[1540]: 2025-05-15 16:29:24.902 [INFO][4087] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:24.951644 containerd[1540]: 2025-05-15 16:29:24.902 [INFO][4087] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:24.951644 containerd[1540]: 2025-05-15 16:29:24.905 [INFO][4087] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd May 15 16:29:24.951644 containerd[1540]: 2025-05-15 16:29:24.911 [INFO][4087] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:24.951644 containerd[1540]: 2025-05-15 16:29:24.919 [INFO][4087] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.130/26] block=192.168.53.128/26 handle="k8s-pod-network.df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:24.951644 containerd[1540]: 2025-05-15 16:29:24.919 [INFO][4087] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.130/26] handle="k8s-pod-network.df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:24.951644 containerd[1540]: 2025-05-15 16:29:24.919 [INFO][4087] ipam/ipam_plugin.go 374: Released host-wide 
IPAM lock. May 15 16:29:24.951644 containerd[1540]: 2025-05-15 16:29:24.919 [INFO][4087] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.130/26] IPv6=[] ContainerID="df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" HandleID="k8s-pod-network.df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-eth0" May 15 16:29:24.951849 containerd[1540]: 2025-05-15 16:29:24.922 [INFO][4074] cni-plugin/k8s.go 386: Populated endpoint ContainerID="df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hd8" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"aeb574cf-384e-4f72-96fe-7c6174fca348", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 16, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-a-855fb07f2a.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-v9hd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36d0f681e47", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 16:29:24.951849 containerd[1540]: 2025-05-15 16:29:24.922 [INFO][4074] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.130/32] ContainerID="df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hd8" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-eth0" May 15 16:29:24.951849 containerd[1540]: 2025-05-15 16:29:24.922 [INFO][4074] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36d0f681e47 ContainerID="df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hd8" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-eth0" May 15 16:29:24.951849 containerd[1540]: 2025-05-15 16:29:24.927 [INFO][4074] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hd8" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-eth0" May 15 16:29:24.951849 containerd[1540]: 2025-05-15 16:29:24.929 [INFO][4074] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hd8" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"aeb574cf-384e-4f72-96fe-7c6174fca348", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 16, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-a-855fb07f2a.novalocal", ContainerID:"df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd", Pod:"coredns-6f6b679f8f-v9hd8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36d0f681e47", MAC:"5a:ab:7c:a6:48:4e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 16:29:24.951849 containerd[1540]: 2025-05-15 16:29:24.946 [INFO][4074] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9hd8" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--v9hd8-eth0" May 15 16:29:25.041912 containerd[1540]: time="2025-05-15T16:29:25.041617941Z" level=info msg="connecting to shim df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd" address="unix:///run/containerd/s/24b6b256917746afa15831f18d32378ff4708483fc68ae646b126ee81fdc37d9" namespace=k8s.io protocol=ttrpc version=3 May 15 16:29:25.140107 systemd[1]: Started cri-containerd-df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd.scope - libcontainer container df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd. 
May 15 16:29:25.233766 containerd[1540]: time="2025-05-15T16:29:25.233351449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9hd8,Uid:aeb574cf-384e-4f72-96fe-7c6174fca348,Namespace:kube-system,Attempt:0,} returns sandbox id \"df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd\"" May 15 16:29:25.239084 containerd[1540]: time="2025-05-15T16:29:25.238775344Z" level=info msg="CreateContainer within sandbox \"df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 16:29:25.253481 containerd[1540]: time="2025-05-15T16:29:25.253405292Z" level=info msg="Container 70bfe5cc51066bdf6869fac284ab93cce8e5dd5e5ec682b3753a7c59d2e9304d: CDI devices from CRI Config.CDIDevices: []" May 15 16:29:25.268659 containerd[1540]: time="2025-05-15T16:29:25.268612522Z" level=info msg="CreateContainer within sandbox \"df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70bfe5cc51066bdf6869fac284ab93cce8e5dd5e5ec682b3753a7c59d2e9304d\"" May 15 16:29:25.269480 containerd[1540]: time="2025-05-15T16:29:25.269460883Z" level=info msg="StartContainer for \"70bfe5cc51066bdf6869fac284ab93cce8e5dd5e5ec682b3753a7c59d2e9304d\"" May 15 16:29:25.270821 containerd[1540]: time="2025-05-15T16:29:25.270759630Z" level=info msg="connecting to shim 70bfe5cc51066bdf6869fac284ab93cce8e5dd5e5ec682b3753a7c59d2e9304d" address="unix:///run/containerd/s/24b6b256917746afa15831f18d32378ff4708483fc68ae646b126ee81fdc37d9" protocol=ttrpc version=3 May 15 16:29:25.299066 systemd[1]: Started cri-containerd-70bfe5cc51066bdf6869fac284ab93cce8e5dd5e5ec682b3753a7c59d2e9304d.scope - libcontainer container 70bfe5cc51066bdf6869fac284ab93cce8e5dd5e5ec682b3753a7c59d2e9304d. 
May 15 16:29:25.353100 containerd[1540]: time="2025-05-15T16:29:25.353049673Z" level=info msg="StartContainer for \"70bfe5cc51066bdf6869fac284ab93cce8e5dd5e5ec682b3753a7c59d2e9304d\" returns successfully" May 15 16:29:25.355967 containerd[1540]: time="2025-05-15T16:29:25.355921100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:25.357599 containerd[1540]: time="2025-05-15T16:29:25.357159464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 15 16:29:25.358653 containerd[1540]: time="2025-05-15T16:29:25.358520526Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:25.363576 containerd[1540]: time="2025-05-15T16:29:25.362931171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:25.364674 containerd[1540]: time="2025-05-15T16:29:25.364499544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.296178267s" May 15 16:29:25.364674 containerd[1540]: time="2025-05-15T16:29:25.364535010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 15 16:29:25.369042 containerd[1540]: time="2025-05-15T16:29:25.368163347Z" level=info msg="CreateContainer within sandbox \"6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 16:29:25.383934 containerd[1540]: time="2025-05-15T16:29:25.383888359Z" level=info msg="Container f573c1e452f80be11a700cde05c523a07acd9fbc699678b9b2c71042d834a06d: CDI devices from CRI Config.CDIDevices: []" May 15 16:29:25.403056 containerd[1540]: time="2025-05-15T16:29:25.402793486Z" level=info msg="CreateContainer within sandbox \"6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f573c1e452f80be11a700cde05c523a07acd9fbc699678b9b2c71042d834a06d\"" May 15 16:29:25.404739 containerd[1540]: time="2025-05-15T16:29:25.404676399Z" level=info msg="StartContainer for \"f573c1e452f80be11a700cde05c523a07acd9fbc699678b9b2c71042d834a06d\"" May 15 16:29:25.414350 containerd[1540]: time="2025-05-15T16:29:25.414268975Z" level=info msg="connecting to shim f573c1e452f80be11a700cde05c523a07acd9fbc699678b9b2c71042d834a06d" address="unix:///run/containerd/s/b08d676504fd6a1b861cb28c48981f4c3181c565e92a394fdfa614f959872090" protocol=ttrpc version=3 May 15 16:29:25.441071 systemd[1]: Started cri-containerd-f573c1e452f80be11a700cde05c523a07acd9fbc699678b9b2c71042d834a06d.scope - libcontainer container f573c1e452f80be11a700cde05c523a07acd9fbc699678b9b2c71042d834a06d. 
May 15 16:29:25.500163 containerd[1540]: time="2025-05-15T16:29:25.498738176Z" level=info msg="StartContainer for \"f573c1e452f80be11a700cde05c523a07acd9fbc699678b9b2c71042d834a06d\" returns successfully" May 15 16:29:25.502962 containerd[1540]: time="2025-05-15T16:29:25.502614880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 16:29:25.624222 systemd-networkd[1454]: vxlan.calico: Gained IPv6LL May 15 16:29:25.710024 containerd[1540]: time="2025-05-15T16:29:25.709858757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d5cbdb48-6t246,Uid:78f71923-9d79-4c68-9efd-57501e9f85c4,Namespace:calico-apiserver,Attempt:0,}" May 15 16:29:25.945752 systemd-networkd[1454]: cali88cda4a221d: Link UP May 15 16:29:25.947167 systemd-networkd[1454]: cali88cda4a221d: Gained carrier May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.811 [INFO][4223] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-eth0 calico-apiserver-85d5cbdb48- calico-apiserver 78f71923-9d79-4c68-9efd-57501e9f85c4 721 0 2025-05-15 16:28:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85d5cbdb48 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4334-0-0-a-855fb07f2a.novalocal calico-apiserver-85d5cbdb48-6t246 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali88cda4a221d [] []}} ContainerID="985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-6t246" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-" May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.811 [INFO][4223] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-6t246" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-eth0" May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.863 [INFO][4234] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" HandleID="k8s-pod-network.985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-eth0" May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.881 [INFO][4234] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" HandleID="k8s-pod-network.985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027ef70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4334-0-0-a-855fb07f2a.novalocal", "pod":"calico-apiserver-85d5cbdb48-6t246", "timestamp":"2025-05-15 16:29:25.863299735 +0000 UTC"}, Hostname:"ci-4334-0-0-a-855fb07f2a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.881 [INFO][4234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.881 [INFO][4234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.881 [INFO][4234] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334-0-0-a-855fb07f2a.novalocal' May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.885 [INFO][4234] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.893 [INFO][4234] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.901 [INFO][4234] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.904 [INFO][4234] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.907 [INFO][4234] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.908 [INFO][4234] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.910 [INFO][4234] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.929 [INFO][4234] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.938 [INFO][4234] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.131/26] block=192.168.53.128/26 handle="k8s-pod-network.985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.938 [INFO][4234] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.131/26] handle="k8s-pod-network.985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.938 [INFO][4234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 16:29:25.979945 containerd[1540]: 2025-05-15 16:29:25.938 [INFO][4234] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.131/26] IPv6=[] ContainerID="985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" HandleID="k8s-pod-network.985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-eth0" May 15 16:29:25.981539 containerd[1540]: 2025-05-15 16:29:25.941 [INFO][4223] cni-plugin/k8s.go 386: Populated endpoint ContainerID="985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-6t246" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-eth0", GenerateName:"calico-apiserver-85d5cbdb48-", Namespace:"calico-apiserver", SelfLink:"", UID:"78f71923-9d79-4c68-9efd-57501e9f85c4", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 16, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d5cbdb48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-a-855fb07f2a.novalocal", ContainerID:"", Pod:"calico-apiserver-85d5cbdb48-6t246", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88cda4a221d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 16:29:25.981539 containerd[1540]: 2025-05-15 16:29:25.941 [INFO][4223] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.131/32] ContainerID="985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-6t246" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-eth0" May 15 16:29:25.981539 containerd[1540]: 2025-05-15 16:29:25.942 [INFO][4223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88cda4a221d ContainerID="985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-6t246" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-eth0" May 15 16:29:25.981539 containerd[1540]: 2025-05-15 16:29:25.947 [INFO][4223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-6t246" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-eth0" May 15 16:29:25.981539 
containerd[1540]: 2025-05-15 16:29:25.948 [INFO][4223] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-6t246" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-eth0", GenerateName:"calico-apiserver-85d5cbdb48-", Namespace:"calico-apiserver", SelfLink:"", UID:"78f71923-9d79-4c68-9efd-57501e9f85c4", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 16, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d5cbdb48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-a-855fb07f2a.novalocal", ContainerID:"985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa", Pod:"calico-apiserver-85d5cbdb48-6t246", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali88cda4a221d", MAC:"1a:50:71:99:7f:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 16:29:25.981539 containerd[1540]: 2025-05-15 16:29:25.970 [INFO][4223] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-6t246" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--6t246-eth0" May 15 16:29:26.007511 kubelet[2786]: I0515 16:29:26.007239 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-v9hd8" podStartSLOduration=38.007222025 podStartE2EDuration="38.007222025s" podCreationTimestamp="2025-05-15 16:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 16:29:26.006777019 +0000 UTC m=+43.414330707" watchObservedRunningTime="2025-05-15 16:29:26.007222025 +0000 UTC m=+43.414775712" May 15 16:29:26.042250 containerd[1540]: time="2025-05-15T16:29:26.042016631Z" level=info msg="connecting to shim 985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa" address="unix:///run/containerd/s/cc28e575c43c1e21b0abcae19a6ffe165302c359f52f0bba67b26a61297a337d" namespace=k8s.io protocol=ttrpc version=3 May 15 16:29:26.079085 systemd[1]: Started cri-containerd-985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa.scope - libcontainer container 985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa. 
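
The pod_startup_latency_tracker entry above encodes a simple relationship: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration additionally excludes image-pull time, which is zero here since both pull timestamps are the zero value (compare the calico-node-wlbtn entry earlier, where roughly 24.9s of pulling is excluded). A sketch of that arithmetic with the coredns-6f6b679f8f-v9hd8 timestamps quoted in the log (truncated to microseconds):

```python
from datetime import datetime, timezone

# Timestamps quoted from the coredns-6f6b679f8f-v9hd8 latency entry above.
created = datetime(2025, 5, 15, 16, 28, 48, tzinfo=timezone.utc)
running = datetime(2025, 5, 15, 16, 29, 26, 7222, tzinfo=timezone.utc)  # 16:29:26.007222025

e2e = (running - created).total_seconds()
pull = 0.0  # firstStartedPulling/lastFinishedPulling are the zero value: nothing was pulled
print(f"podStartE2EDuration ~ {e2e:.6f}s, podStartSLOduration ~ {e2e - pull:.6f}s")
# -> ~38.007222s for both, matching the logged values
```
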
May 15 16:29:26.150180 containerd[1540]: time="2025-05-15T16:29:26.150117584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d5cbdb48-6t246,Uid:78f71923-9d79-4c68-9efd-57501e9f85c4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa\"" May 15 16:29:26.710188 containerd[1540]: time="2025-05-15T16:29:26.710062066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d5cbdb48-m6cvl,Uid:12ca99da-51df-40ee-9827-8e6c699985d2,Namespace:calico-apiserver,Attempt:0,}" May 15 16:29:26.713258 containerd[1540]: time="2025-05-15T16:29:26.712993395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s5czs,Uid:f15c2804-51fb-4484-8678-65ec76ec5acb,Namespace:kube-system,Attempt:0,}" May 15 16:29:26.713593 containerd[1540]: time="2025-05-15T16:29:26.713406129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b79df777b-82jtl,Uid:1306ce75-0224-482a-9247-f27c93c10517,Namespace:calico-system,Attempt:0,}" May 15 16:29:26.777978 systemd-networkd[1454]: cali36d0f681e47: Gained IPv6LL May 15 16:29:27.043247 systemd-networkd[1454]: cali13e57603c17: Link UP May 15 16:29:27.043510 systemd-networkd[1454]: cali13e57603c17: Gained carrier May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:26.830 [INFO][4323] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-eth0 calico-kube-controllers-7b79df777b- calico-system 1306ce75-0224-482a-9247-f27c93c10517 722 0 2025-05-15 16:28:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b79df777b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4334-0-0-a-855fb07f2a.novalocal calico-kube-controllers-7b79df777b-82jtl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali13e57603c17 [] []}} ContainerID="f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" Namespace="calico-system" Pod="calico-kube-controllers-7b79df777b-82jtl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-" May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:26.830 [INFO][4323] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" Namespace="calico-system" Pod="calico-kube-controllers-7b79df777b-82jtl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-eth0" May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:26.942 [INFO][4346] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" HandleID="k8s-pod-network.f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-eth0" May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:26.964 [INFO][4346] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" 
HandleID="k8s-pod-network.f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000512f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4334-0-0-a-855fb07f2a.novalocal", "pod":"calico-kube-controllers-7b79df777b-82jtl", "timestamp":"2025-05-15 16:29:26.942536849 +0000 UTC"}, Hostname:"ci-4334-0-0-a-855fb07f2a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:26.964 [INFO][4346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:26.965 [INFO][4346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:26.965 [INFO][4346] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334-0-0-a-855fb07f2a.novalocal' May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:26.969 [INFO][4346] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:26.978 [INFO][4346] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:26.990 [INFO][4346] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:26.994 [INFO][4346] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:27.004 [INFO][4346] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:27.004 [INFO][4346] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:27.007 [INFO][4346] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:27.018 [INFO][4346] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:27.028 [INFO][4346] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.132/26] block=192.168.53.128/26 handle="k8s-pod-network.f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:27.029 [INFO][4346] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.132/26] handle="k8s-pod-network.f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" 
host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:27.029 [INFO][4346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 16:29:27.062647 containerd[1540]: 2025-05-15 16:29:27.029 [INFO][4346] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.132/26] IPv6=[] ContainerID="f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" HandleID="k8s-pod-network.f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-eth0" May 15 16:29:27.064668 containerd[1540]: 2025-05-15 16:29:27.034 [INFO][4323] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" Namespace="calico-system" Pod="calico-kube-controllers-7b79df777b-82jtl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-eth0", GenerateName:"calico-kube-controllers-7b79df777b-", Namespace:"calico-system", SelfLink:"", UID:"1306ce75-0224-482a-9247-f27c93c10517", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 16, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b79df777b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-a-855fb07f2a.novalocal", ContainerID:"", Pod:"calico-kube-controllers-7b79df777b-82jtl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali13e57603c17", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 16:29:27.064668 containerd[1540]: 2025-05-15 16:29:27.034 [INFO][4323] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.132/32] ContainerID="f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" Namespace="calico-system" Pod="calico-kube-controllers-7b79df777b-82jtl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-eth0" May 15 16:29:27.064668 containerd[1540]: 2025-05-15 16:29:27.035 [INFO][4323] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13e57603c17 ContainerID="f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" Namespace="calico-system" Pod="calico-kube-controllers-7b79df777b-82jtl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-eth0" May 15 16:29:27.064668 containerd[1540]: 2025-05-15 16:29:27.044 [INFO][4323] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" Namespace="calico-system" Pod="calico-kube-controllers-7b79df777b-82jtl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-eth0" May 15 16:29:27.064668 containerd[1540]: 2025-05-15 16:29:27.045 [INFO][4323] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" Namespace="calico-system" Pod="calico-kube-controllers-7b79df777b-82jtl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-eth0", GenerateName:"calico-kube-controllers-7b79df777b-", Namespace:"calico-system", SelfLink:"", UID:"1306ce75-0224-482a-9247-f27c93c10517", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 16, 28, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b79df777b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-a-855fb07f2a.novalocal", ContainerID:"f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f", Pod:"calico-kube-controllers-7b79df777b-82jtl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali13e57603c17", MAC:"12:b9:f9:15:1b:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 16:29:27.064668 containerd[1540]: 2025-05-15 16:29:27.058 [INFO][4323] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" Namespace="calico-system" Pod="calico-kube-controllers-7b79df777b-82jtl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--kube--controllers--7b79df777b--82jtl-eth0" May 15 16:29:27.125601 containerd[1540]: time="2025-05-15T16:29:27.125545959Z" level=info msg="connecting to shim f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f" address="unix:///run/containerd/s/0dc414ced52da3425373247f1258c9a35c73991513ee89a0c17d5419c94c4d76" namespace=k8s.io protocol=ttrpc version=3 May 15 16:29:27.153030 systemd-networkd[1454]: cali5fc7f5370e0: Link UP May 15 16:29:27.154695 systemd-networkd[1454]: cali5fc7f5370e0: Gained carrier May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:26.858 [INFO][4322] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-eth0 coredns-6f6b679f8f- kube-system f15c2804-51fb-4484-8678-65ec76ec5acb 714 0 2025-05-15 16:28:48 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4334-0-0-a-855fb07f2a.novalocal coredns-6f6b679f8f-s5czs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5fc7f5370e0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" Namespace="kube-system" Pod="coredns-6f6b679f8f-s5czs" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-" May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:26.858 [INFO][4322] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" Namespace="kube-system" Pod="coredns-6f6b679f8f-s5czs" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-eth0" May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:26.957 [INFO][4351] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" HandleID="k8s-pod-network.7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-eth0" May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:26.974 [INFO][4351] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" HandleID="k8s-pod-network.7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ad3e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4334-0-0-a-855fb07f2a.novalocal", "pod":"coredns-6f6b679f8f-s5czs", "timestamp":"2025-05-15 16:29:26.957551137 +0000 UTC"}, Hostname:"ci-4334-0-0-a-855fb07f2a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:26.974 [INFO][4351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.029 [INFO][4351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.029 [INFO][4351] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334-0-0-a-855fb07f2a.novalocal' May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.069 [INFO][4351] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.076 [INFO][4351] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.094 [INFO][4351] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.098 [INFO][4351] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.107 [INFO][4351] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.109 [INFO][4351] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.114 [INFO][4351] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.125 [INFO][4351] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.138 [INFO][4351] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.133/26] block=192.168.53.128/26 handle="k8s-pod-network.7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.138 [INFO][4351] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.133/26] handle="k8s-pod-network.7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.138 [INFO][4351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
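The [INFO][4351] ipam lines above trace Calico's block-affinity IPAM for the coredns pod: acquire the host-wide IPAM lock, look up the host's affine block (192.168.53.128/26), load it, claim the next free address, create a handle, write the block back to the datastore, and release the lock, yielding 192.168.53.133/26. The sketch below is only a simplified illustration of that lock/claim/write sequence; the type and function names are hypothetical, not Calico's actual code:

    package main

    import (
        "fmt"
        "sync"
    )

    // block is a hypothetical stand-in for a Calico IPAM affinity block such as 192.168.53.128/26.
    type block struct {
        next      int            // next host index to try within the /26
        allocated map[int]string // host index -> handle ID
    }

    var ipamLock sync.Mutex // stands in for the host-wide IPAM lock in the log

    // assignFromBlock mirrors the logged sequence: lock, claim an address, write the block back, unlock.
    func assignFromBlock(b *block, handleID string) string {
        ipamLock.Lock()         // "Acquired host-wide IPAM lock."
        defer ipamLock.Unlock() // "Released host-wide IPAM lock."

        idx := b.next // "Attempting to assign 1 addresses from block"
        b.allocated[idx] = handleID
        b.next++ // "Writing block in order to claim IPs"
        return fmt.Sprintf("192.168.53.%d/26", 128+idx)
    }

    func main() {
        // Lower addresses in the block are already held by earlier endpoints in this journal.
        b := &block{next: 5, allocated: map[int]string{}}
        fmt.Println("claimed:", assignFromBlock(b, "k8s-pod-network.7ca0ae33...")) // 192.168.53.133/26
    }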
May 15 16:29:27.190741 containerd[1540]: 2025-05-15 16:29:27.139 [INFO][4351] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.133/26] IPv6=[] ContainerID="7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" HandleID="k8s-pod-network.7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-eth0" May 15 16:29:27.192386 containerd[1540]: 2025-05-15 16:29:27.147 [INFO][4322] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" Namespace="kube-system" Pod="coredns-6f6b679f8f-s5czs" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f15c2804-51fb-4484-8678-65ec76ec5acb", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 16, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-a-855fb07f2a.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-s5czs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fc7f5370e0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 16:29:27.192386 containerd[1540]: 2025-05-15 16:29:27.147 [INFO][4322] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.133/32] ContainerID="7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" Namespace="kube-system" Pod="coredns-6f6b679f8f-s5czs" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-eth0" May 15 16:29:27.192386 containerd[1540]: 2025-05-15 16:29:27.147 [INFO][4322] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fc7f5370e0 ContainerID="7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" Namespace="kube-system" Pod="coredns-6f6b679f8f-s5czs" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-eth0" May 15 16:29:27.192386 containerd[1540]: 2025-05-15 16:29:27.153 [INFO][4322] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-s5czs" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-eth0" May 15 16:29:27.192386 containerd[1540]: 2025-05-15 16:29:27.158 [INFO][4322] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" Namespace="kube-system" Pod="coredns-6f6b679f8f-s5czs" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f15c2804-51fb-4484-8678-65ec76ec5acb", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 16, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-a-855fb07f2a.novalocal", ContainerID:"7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d", Pod:"coredns-6f6b679f8f-s5czs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5fc7f5370e0", MAC:"5e:da:77:68:3e:5e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 16:29:27.192386 containerd[1540]: 2025-05-15 16:29:27.182 [INFO][4322] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" Namespace="kube-system" Pod="coredns-6f6b679f8f-s5czs" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-coredns--6f6b679f8f--s5czs-eth0" May 15 16:29:27.201416 systemd[1]: Started cri-containerd-f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f.scope - libcontainer container f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f. 
May 15 16:29:27.255496 systemd-networkd[1454]: cali8796f364259: Link UP May 15 16:29:27.261088 systemd-networkd[1454]: cali8796f364259: Gained carrier May 15 16:29:27.275124 containerd[1540]: time="2025-05-15T16:29:27.275072613Z" level=info msg="connecting to shim 7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d" address="unix:///run/containerd/s/eb97b379c93c7f85040209c17fb42f9d494b83fcf994fbee74fd48e53d639ac8" namespace=k8s.io protocol=ttrpc version=3 May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:26.857 [INFO][4308] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-eth0 calico-apiserver-85d5cbdb48- calico-apiserver 12ca99da-51df-40ee-9827-8e6c699985d2 718 0 2025-05-15 16:28:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85d5cbdb48 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4334-0-0-a-855fb07f2a.novalocal calico-apiserver-85d5cbdb48-m6cvl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8796f364259 [] []}} ContainerID="174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-m6cvl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-" May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:26.857 [INFO][4308] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-m6cvl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-eth0" May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:26.980 [INFO][4353] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" HandleID="k8s-pod-network.174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-eth0" May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.004 [INFO][4353] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" HandleID="k8s-pod-network.174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b5950), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4334-0-0-a-855fb07f2a.novalocal", "pod":"calico-apiserver-85d5cbdb48-m6cvl", "timestamp":"2025-05-15 16:29:26.980536869 +0000 UTC"}, Hostname:"ci-4334-0-0-a-855fb07f2a.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.004 [INFO][4353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.138 [INFO][4353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.139 [INFO][4353] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4334-0-0-a-855fb07f2a.novalocal' May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.171 [INFO][4353] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.180 [INFO][4353] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.196 [INFO][4353] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.203 [INFO][4353] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.208 [INFO][4353] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.208 [INFO][4353] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.211 [INFO][4353] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8 May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.220 [INFO][4353] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.240 [INFO][4353] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.134/26] block=192.168.53.128/26 handle="k8s-pod-network.174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.240 [INFO][4353] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.134/26] handle="k8s-pod-network.174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" host="ci-4334-0-0-a-855fb07f2a.novalocal" May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.240 [INFO][4353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 16:29:27.298774 containerd[1540]: 2025-05-15 16:29:27.240 [INFO][4353] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.134/26] IPv6=[] ContainerID="174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" HandleID="k8s-pod-network.174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" Workload="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-eth0" May 15 16:29:27.299497 containerd[1540]: 2025-05-15 16:29:27.247 [INFO][4308] cni-plugin/k8s.go 386: Populated endpoint ContainerID="174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-m6cvl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-eth0", GenerateName:"calico-apiserver-85d5cbdb48-", Namespace:"calico-apiserver", SelfLink:"", UID:"12ca99da-51df-40ee-9827-8e6c699985d2", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 16, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d5cbdb48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-a-855fb07f2a.novalocal", ContainerID:"", Pod:"calico-apiserver-85d5cbdb48-m6cvl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8796f364259", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 16:29:27.299497 containerd[1540]: 2025-05-15 16:29:27.247 [INFO][4308] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.134/32] ContainerID="174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-m6cvl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-eth0" May 15 16:29:27.299497 containerd[1540]: 2025-05-15 16:29:27.247 [INFO][4308] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8796f364259 ContainerID="174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-m6cvl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-eth0" May 15 16:29:27.299497 containerd[1540]: 2025-05-15 16:29:27.259 [INFO][4308] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-m6cvl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-eth0" May 15 16:29:27.299497 
containerd[1540]: 2025-05-15 16:29:27.261 [INFO][4308] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-m6cvl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-eth0", GenerateName:"calico-apiserver-85d5cbdb48-", Namespace:"calico-apiserver", SelfLink:"", UID:"12ca99da-51df-40ee-9827-8e6c699985d2", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 16, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d5cbdb48", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4334-0-0-a-855fb07f2a.novalocal", ContainerID:"174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8", Pod:"calico-apiserver-85d5cbdb48-m6cvl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8796f364259", MAC:"ae:5c:8b:ad:53:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 16:29:27.299497 containerd[1540]: 2025-05-15 16:29:27.292 [INFO][4308] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" Namespace="calico-apiserver" Pod="calico-apiserver-85d5cbdb48-m6cvl" WorkloadEndpoint="ci--4334--0--0--a--855fb07f2a.novalocal-k8s-calico--apiserver--85d5cbdb48--m6cvl-eth0" May 15 16:29:27.332139 systemd[1]: Started cri-containerd-7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d.scope - libcontainer container 7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d. May 15 16:29:27.375065 containerd[1540]: time="2025-05-15T16:29:27.374650162Z" level=info msg="connecting to shim 174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" address="unix:///run/containerd/s/bcab8896a54130d42ac9de61e3f77379735153d836f09e406cecf898b9773b61" namespace=k8s.io protocol=ttrpc version=3 May 15 16:29:27.422100 systemd[1]: Started cri-containerd-174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8.scope - libcontainer container 174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8. 
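Each "connecting to shim" entry above pairs a container ID with a per-shim ttrpc socket under /run/containerd/s/, after which systemd starts the matching cri-containerd-<id>.scope unit. When correlating scope units with shim sockets it can help to pull those two fields out of the journal; the sketch below does so with a regular expression derived only from the lines shown here (the message format is not a stable containerd interface):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches the containerd "connecting to shim" entries seen in this journal:
    //   msg="connecting to shim <container-id>" address="unix:///run/containerd/s/<socket-id>"
    var shimRe = regexp.MustCompile(`connecting to shim (\w+)" address="(unix://[^"]+)"`)

    func main() {
        line := `time="2025-05-15T16:29:27.374650162Z" level=info msg="connecting to shim 174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8" address="unix:///run/containerd/s/bcab8896a54130d42ac9de61e3f77379735153d836f09e406cecf898b9773b61" namespace=k8s.io protocol=ttrpc version=3`
        if m := shimRe.FindStringSubmatch(line); m != nil {
            fmt.Println("container:  ", m[1])
            fmt.Println("shim socket:", m[2])
        }
    }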
May 15 16:29:27.454903 containerd[1540]: time="2025-05-15T16:29:27.454350150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s5czs,Uid:f15c2804-51fb-4484-8678-65ec76ec5acb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d\"" May 15 16:29:27.463438 containerd[1540]: time="2025-05-15T16:29:27.463087230Z" level=info msg="CreateContainer within sandbox \"7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 16:29:27.491181 containerd[1540]: time="2025-05-15T16:29:27.491139768Z" level=info msg="Container 4a80b3a2d5d5e6f7edd789862be6e51f642787760966e048117de6901b1cdc8b: CDI devices from CRI Config.CDIDevices: []" May 15 16:29:27.508316 containerd[1540]: time="2025-05-15T16:29:27.508215962Z" level=info msg="CreateContainer within sandbox \"7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a80b3a2d5d5e6f7edd789862be6e51f642787760966e048117de6901b1cdc8b\"" May 15 16:29:27.510901 containerd[1540]: time="2025-05-15T16:29:27.509034539Z" level=info msg="StartContainer for \"4a80b3a2d5d5e6f7edd789862be6e51f642787760966e048117de6901b1cdc8b\"" May 15 16:29:27.512776 containerd[1540]: time="2025-05-15T16:29:27.512731043Z" level=info msg="connecting to shim 4a80b3a2d5d5e6f7edd789862be6e51f642787760966e048117de6901b1cdc8b" address="unix:///run/containerd/s/eb97b379c93c7f85040209c17fb42f9d494b83fcf994fbee74fd48e53d639ac8" protocol=ttrpc version=3 May 15 16:29:27.531152 containerd[1540]: time="2025-05-15T16:29:27.531111675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b79df777b-82jtl,Uid:1306ce75-0224-482a-9247-f27c93c10517,Namespace:calico-system,Attempt:0,} returns sandbox id \"f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f\"" May 15 16:29:27.556332 systemd[1]: Started cri-containerd-4a80b3a2d5d5e6f7edd789862be6e51f642787760966e048117de6901b1cdc8b.scope - libcontainer container 4a80b3a2d5d5e6f7edd789862be6e51f642787760966e048117de6901b1cdc8b. 
May 15 16:29:27.566724 containerd[1540]: time="2025-05-15T16:29:27.566554857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d5cbdb48-m6cvl,Uid:12ca99da-51df-40ee-9827-8e6c699985d2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8\"" May 15 16:29:27.616654 containerd[1540]: time="2025-05-15T16:29:27.616521455Z" level=info msg="StartContainer for \"4a80b3a2d5d5e6f7edd789862be6e51f642787760966e048117de6901b1cdc8b\" returns successfully" May 15 16:29:27.928063 systemd-networkd[1454]: cali88cda4a221d: Gained IPv6LL May 15 16:29:28.043447 kubelet[2786]: I0515 16:29:28.043357 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-s5czs" podStartSLOduration=40.043201051 podStartE2EDuration="40.043201051s" podCreationTimestamp="2025-05-15 16:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 16:29:28.042074958 +0000 UTC m=+45.449628745" watchObservedRunningTime="2025-05-15 16:29:28.043201051 +0000 UTC m=+45.450754728" May 15 16:29:28.440483 systemd-networkd[1454]: cali5fc7f5370e0: Gained IPv6LL May 15 16:29:28.712740 containerd[1540]: time="2025-05-15T16:29:28.712190592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:28.715096 containerd[1540]: time="2025-05-15T16:29:28.714427939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 15 16:29:28.716630 containerd[1540]: time="2025-05-15T16:29:28.716594895Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:28.719617 containerd[1540]: time="2025-05-15T16:29:28.719571730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:28.720724 containerd[1540]: time="2025-05-15T16:29:28.720655522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 3.217997s" May 15 16:29:28.720724 containerd[1540]: time="2025-05-15T16:29:28.720722187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 15 16:29:28.722096 containerd[1540]: time="2025-05-15T16:29:28.722032396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 16:29:28.724942 containerd[1540]: time="2025-05-15T16:29:28.724504533Z" level=info msg="CreateContainer within sandbox \"6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 16:29:28.743227 containerd[1540]: time="2025-05-15T16:29:28.743147186Z" level=info msg="Container 
af17c03f4abaf78c6ce2d480fee745f73b91fb796267c9228ae951916f9e3c67: CDI devices from CRI Config.CDIDevices: []" May 15 16:29:28.751845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484905971.mount: Deactivated successfully. May 15 16:29:28.811143 containerd[1540]: time="2025-05-15T16:29:28.811017871Z" level=info msg="CreateContainer within sandbox \"6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"af17c03f4abaf78c6ce2d480fee745f73b91fb796267c9228ae951916f9e3c67\"" May 15 16:29:28.813962 containerd[1540]: time="2025-05-15T16:29:28.813906660Z" level=info msg="StartContainer for \"af17c03f4abaf78c6ce2d480fee745f73b91fb796267c9228ae951916f9e3c67\"" May 15 16:29:28.820784 containerd[1540]: time="2025-05-15T16:29:28.820224251Z" level=info msg="connecting to shim af17c03f4abaf78c6ce2d480fee745f73b91fb796267c9228ae951916f9e3c67" address="unix:///run/containerd/s/b08d676504fd6a1b861cb28c48981f4c3181c565e92a394fdfa614f959872090" protocol=ttrpc version=3 May 15 16:29:28.893157 systemd[1]: Started cri-containerd-af17c03f4abaf78c6ce2d480fee745f73b91fb796267c9228ae951916f9e3c67.scope - libcontainer container af17c03f4abaf78c6ce2d480fee745f73b91fb796267c9228ae951916f9e3c67. May 15 16:29:28.953565 systemd-networkd[1454]: cali13e57603c17: Gained IPv6LL May 15 16:29:28.957150 containerd[1540]: time="2025-05-15T16:29:28.957112381Z" level=info msg="StartContainer for \"af17c03f4abaf78c6ce2d480fee745f73b91fb796267c9228ae951916f9e3c67\" returns successfully" May 15 16:29:29.042980 kubelet[2786]: I0515 16:29:29.042570 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7nr2s" podStartSLOduration=27.388327842 podStartE2EDuration="33.042536363s" podCreationTimestamp="2025-05-15 16:28:56 +0000 UTC" firstStartedPulling="2025-05-15 16:29:23.067591218 +0000 UTC m=+40.475144895" lastFinishedPulling="2025-05-15 16:29:28.721799739 +0000 UTC m=+46.129353416" observedRunningTime="2025-05-15 16:29:29.040919941 +0000 UTC m=+46.448473618" watchObservedRunningTime="2025-05-15 16:29:29.042536363 +0000 UTC m=+46.450090040" May 15 16:29:29.208352 systemd-networkd[1454]: cali8796f364259: Gained IPv6LL May 15 16:29:29.825934 kubelet[2786]: I0515 16:29:29.824846 2786 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 16:29:29.825934 kubelet[2786]: I0515 16:29:29.824957 2786 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 16:29:32.180502 containerd[1540]: time="2025-05-15T16:29:32.180442915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" id:\"a9147908056d1be863ae49617ca8afa3faebeb698e9b902653f6e2c4c255fa79\" pid:4633 exited_at:{seconds:1747326572 nanos:180100783}" May 15 16:29:34.659015 containerd[1540]: time="2025-05-15T16:29:34.658610556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:34.660924 containerd[1540]: time="2025-05-15T16:29:34.660880434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 15 16:29:34.662188 containerd[1540]: 
time="2025-05-15T16:29:34.662150948Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:34.666657 containerd[1540]: time="2025-05-15T16:29:34.666106147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:34.667306 containerd[1540]: time="2025-05-15T16:29:34.667253870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 5.94515502s" May 15 16:29:34.667418 containerd[1540]: time="2025-05-15T16:29:34.667397058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 15 16:29:34.670225 containerd[1540]: time="2025-05-15T16:29:34.669390518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 16:29:34.670570 containerd[1540]: time="2025-05-15T16:29:34.670504728Z" level=info msg="CreateContainer within sandbox \"985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 16:29:34.686898 containerd[1540]: time="2025-05-15T16:29:34.685032259Z" level=info msg="Container 0844c672694440f7310a93ac7ff54c9eaad327d1e5f9c9f765845a4bf716280b: CDI devices from CRI Config.CDIDevices: []" May 15 16:29:34.702090 containerd[1540]: time="2025-05-15T16:29:34.702009947Z" level=info msg="CreateContainer within sandbox \"985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0844c672694440f7310a93ac7ff54c9eaad327d1e5f9c9f765845a4bf716280b\"" May 15 16:29:34.703547 containerd[1540]: time="2025-05-15T16:29:34.702681626Z" level=info msg="StartContainer for \"0844c672694440f7310a93ac7ff54c9eaad327d1e5f9c9f765845a4bf716280b\"" May 15 16:29:34.704129 containerd[1540]: time="2025-05-15T16:29:34.704080400Z" level=info msg="connecting to shim 0844c672694440f7310a93ac7ff54c9eaad327d1e5f9c9f765845a4bf716280b" address="unix:///run/containerd/s/cc28e575c43c1e21b0abcae19a6ffe165302c359f52f0bba67b26a61297a337d" protocol=ttrpc version=3 May 15 16:29:34.735003 systemd[1]: Started cri-containerd-0844c672694440f7310a93ac7ff54c9eaad327d1e5f9c9f765845a4bf716280b.scope - libcontainer container 0844c672694440f7310a93ac7ff54c9eaad327d1e5f9c9f765845a4bf716280b. 
May 15 16:29:34.794661 containerd[1540]: time="2025-05-15T16:29:34.794607173Z" level=info msg="StartContainer for \"0844c672694440f7310a93ac7ff54c9eaad327d1e5f9c9f765845a4bf716280b\" returns successfully" May 15 16:29:35.931471 kubelet[2786]: I0515 16:29:35.931384 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85d5cbdb48-6t246" podStartSLOduration=32.416039555 podStartE2EDuration="40.931369422s" podCreationTimestamp="2025-05-15 16:28:55 +0000 UTC" firstStartedPulling="2025-05-15 16:29:26.152841745 +0000 UTC m=+43.560395422" lastFinishedPulling="2025-05-15 16:29:34.668171612 +0000 UTC m=+52.075725289" observedRunningTime="2025-05-15 16:29:35.073107971 +0000 UTC m=+52.480661648" watchObservedRunningTime="2025-05-15 16:29:35.931369422 +0000 UTC m=+53.338923109" May 15 16:29:39.594927 containerd[1540]: time="2025-05-15T16:29:39.593840317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:39.595350 containerd[1540]: time="2025-05-15T16:29:39.595255922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 15 16:29:39.596663 containerd[1540]: time="2025-05-15T16:29:39.596628867Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:39.599404 containerd[1540]: time="2025-05-15T16:29:39.599368726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:39.600232 containerd[1540]: time="2025-05-15T16:29:39.600199274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 4.929778704s" May 15 16:29:39.600331 containerd[1540]: time="2025-05-15T16:29:39.600313057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 15 16:29:39.601994 containerd[1540]: time="2025-05-15T16:29:39.601791671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 16:29:39.632602 containerd[1540]: time="2025-05-15T16:29:39.632560175Z" level=info msg="CreateContainer within sandbox \"f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 16:29:39.649089 containerd[1540]: time="2025-05-15T16:29:39.649041820Z" level=info msg="Container 69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad: CDI devices from CRI Config.CDIDevices: []" May 15 16:29:39.667224 containerd[1540]: time="2025-05-15T16:29:39.667173520Z" level=info msg="CreateContainer within sandbox \"f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\"" May 15 
16:29:39.668839 containerd[1540]: time="2025-05-15T16:29:39.667750061Z" level=info msg="StartContainer for \"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\"" May 15 16:29:39.670359 containerd[1540]: time="2025-05-15T16:29:39.670319992Z" level=info msg="connecting to shim 69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad" address="unix:///run/containerd/s/0dc414ced52da3425373247f1258c9a35c73991513ee89a0c17d5419c94c4d76" protocol=ttrpc version=3 May 15 16:29:39.702095 systemd[1]: Started cri-containerd-69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad.scope - libcontainer container 69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad. May 15 16:29:39.777049 containerd[1540]: time="2025-05-15T16:29:39.777015204Z" level=info msg="StartContainer for \"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" returns successfully" May 15 16:29:40.101303 kubelet[2786]: I0515 16:29:40.101197 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b79df777b-82jtl" podStartSLOduration=32.035619058 podStartE2EDuration="44.10115873s" podCreationTimestamp="2025-05-15 16:28:56 +0000 UTC" firstStartedPulling="2025-05-15 16:29:27.53555441 +0000 UTC m=+44.943108087" lastFinishedPulling="2025-05-15 16:29:39.601094082 +0000 UTC m=+57.008647759" observedRunningTime="2025-05-15 16:29:40.095513702 +0000 UTC m=+57.503067379" watchObservedRunningTime="2025-05-15 16:29:40.10115873 +0000 UTC m=+57.508712457" May 15 16:29:40.159678 containerd[1540]: time="2025-05-15T16:29:40.159592195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"c5e5548c68a704b66b259699d583ba330e1b6b58727713f8b3b5f012f44c0b94\" pid:4755 exited_at:{seconds:1747326580 nanos:159295178}" May 15 16:29:40.324120 containerd[1540]: time="2025-05-15T16:29:40.323990461Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 16:29:40.326728 containerd[1540]: time="2025-05-15T16:29:40.326563016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 15 16:29:40.337519 containerd[1540]: time="2025-05-15T16:29:40.337138503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 735.309101ms" May 15 16:29:40.338884 containerd[1540]: time="2025-05-15T16:29:40.338229709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 15 16:29:40.352862 containerd[1540]: time="2025-05-15T16:29:40.352315210Z" level=info msg="CreateContainer within sandbox \"174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 16:29:40.375974 containerd[1540]: time="2025-05-15T16:29:40.375800902Z" level=info msg="Container 613fb8bebe20c5609f70dada63171b62e3a180a07ddb0ea84f994ce4d06a8f4d: CDI devices from CRI Config.CDIDevices: []" May 15 16:29:40.398823 containerd[1540]: 
time="2025-05-15T16:29:40.398743897Z" level=info msg="CreateContainer within sandbox \"174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"613fb8bebe20c5609f70dada63171b62e3a180a07ddb0ea84f994ce4d06a8f4d\"" May 15 16:29:40.400087 containerd[1540]: time="2025-05-15T16:29:40.400040710Z" level=info msg="StartContainer for \"613fb8bebe20c5609f70dada63171b62e3a180a07ddb0ea84f994ce4d06a8f4d\"" May 15 16:29:40.403216 containerd[1540]: time="2025-05-15T16:29:40.403122279Z" level=info msg="connecting to shim 613fb8bebe20c5609f70dada63171b62e3a180a07ddb0ea84f994ce4d06a8f4d" address="unix:///run/containerd/s/bcab8896a54130d42ac9de61e3f77379735153d836f09e406cecf898b9773b61" protocol=ttrpc version=3 May 15 16:29:40.435331 systemd[1]: Started cri-containerd-613fb8bebe20c5609f70dada63171b62e3a180a07ddb0ea84f994ce4d06a8f4d.scope - libcontainer container 613fb8bebe20c5609f70dada63171b62e3a180a07ddb0ea84f994ce4d06a8f4d. May 15 16:29:40.521759 containerd[1540]: time="2025-05-15T16:29:40.521434160Z" level=info msg="StartContainer for \"613fb8bebe20c5609f70dada63171b62e3a180a07ddb0ea84f994ce4d06a8f4d\" returns successfully" May 15 16:29:40.632656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3843923827.mount: Deactivated successfully. May 15 16:29:41.687686 kubelet[2786]: I0515 16:29:41.687610 2786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85d5cbdb48-m6cvl" podStartSLOduration=33.914868018 podStartE2EDuration="46.687590705s" podCreationTimestamp="2025-05-15 16:28:55 +0000 UTC" firstStartedPulling="2025-05-15 16:29:27.569925911 +0000 UTC m=+44.977479588" lastFinishedPulling="2025-05-15 16:29:40.342648568 +0000 UTC m=+57.750202275" observedRunningTime="2025-05-15 16:29:41.10605291 +0000 UTC m=+58.513606587" watchObservedRunningTime="2025-05-15 16:29:41.687590705 +0000 UTC m=+59.095144382" May 15 16:29:43.454182 containerd[1540]: time="2025-05-15T16:29:43.454106047Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"a4270ced200ec2558f2cccb6b0e7eea30180bf041fd08f1422c5e2a2e5442a96\" pid:4816 exited_at:{seconds:1747326583 nanos:453534034}" May 15 16:30:02.221961 containerd[1540]: time="2025-05-15T16:30:02.221756759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" id:\"7c55b6b71074024b1d17b0a73449bcfcd788dd5d6de0f624493cd77c2b35dde2\" pid:4848 exited_at:{seconds:1747326602 nanos:221246556}" May 15 16:30:13.409609 containerd[1540]: time="2025-05-15T16:30:13.409488184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"87196fd8fec771f6e234b1375bbe70b3a7048bf19826fa8f443fd3c72984dfd6\" pid:4879 exited_at:{seconds:1747326613 nanos:409270794}" May 15 16:30:20.974015 containerd[1540]: time="2025-05-15T16:30:20.973759109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"a7334446505856c70670e2255d93bc4cec7b49e756aa034917c35b7b2bac237e\" pid:4902 exited_at:{seconds:1747326620 nanos:973188816}" May 15 16:30:32.210511 containerd[1540]: time="2025-05-15T16:30:32.210405334Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" id:\"7c263762d6b911e0d0da6652f129de2bf41ba90a3eba17acc500b15bdf9edab1\" pid:4927 exited_at:{seconds:1747326632 nanos:209120975}" May 15 16:30:43.449505 containerd[1540]: time="2025-05-15T16:30:43.449355556Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"17c6afd7113859f8714c4bfce32f7ff1d702ea7a6be3949d4612597e8570fbb7\" pid:4953 exited_at:{seconds:1747326643 nanos:447531207}" May 15 16:31:02.240744 containerd[1540]: time="2025-05-15T16:31:02.240668759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" id:\"c73ebffec9565c1384b75d06d0796b96151297c6dbfd0827b04ef1eabe69029a\" pid:5002 exited_at:{seconds:1747326662 nanos:238208165}" May 15 16:31:13.466890 containerd[1540]: time="2025-05-15T16:31:13.466579460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"ca9445d1eb4c1919b518af617ce39241c5bfe281d84ba71f5414822687d21965\" pid:5026 exited_at:{seconds:1747326673 nanos:466304858}" May 15 16:31:20.977592 containerd[1540]: time="2025-05-15T16:31:20.977434730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"d0ccb9d496759ca58c5dbb05892a67e91373b3b703b3553211587c8c2f5e9bca\" pid:5050 exited_at:{seconds:1747326680 nanos:976682016}" May 15 16:31:32.249305 containerd[1540]: time="2025-05-15T16:31:32.249201050Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" id:\"66ef7125c3f2a30b2a93ff360ca20a2c940ad6f08cc79547bb4fd8c199c40aba\" pid:5074 exited_at:{seconds:1747326692 nanos:248756661}" May 15 16:31:43.478933 containerd[1540]: time="2025-05-15T16:31:43.478775848Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"a34841e104b2229919f875e3a3aea39de545f7225240a4da826df91e281b31c4\" pid:5099 exited_at:{seconds:1747326703 nanos:478241571}" May 15 16:32:02.246037 containerd[1540]: time="2025-05-15T16:32:02.245525828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" id:\"1dfd88af2a0f66a7dd7af440c9992db4f8f622983390f59f7adf3580fc0e8dcc\" pid:5125 exited_at:{seconds:1747326722 nanos:244476049}" May 15 16:32:13.492597 containerd[1540]: time="2025-05-15T16:32:13.492526113Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"ba68666491be6f826a75ff19731ab521ffd99e4660f996589b66eb748afcf711\" pid:5159 exited_at:{seconds:1747326733 nanos:492112548}" May 15 16:32:21.008784 containerd[1540]: time="2025-05-15T16:32:21.008163180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"b819c5a13483cc789abc400374fe5ef35041a67e1993c1224498294d3451a7d3\" pid:5185 exited_at:{seconds:1747326741 nanos:6940745}" May 15 16:32:32.224422 containerd[1540]: time="2025-05-15T16:32:32.224317752Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" 
id:\"3e55ddaf6cef21653b92552c25289bef0a429e6006c6d3548420ef52986be8ff\" pid:5208 exited_at:{seconds:1747326752 nanos:222521182}" May 15 16:32:43.269247 systemd[1]: Started sshd@9-172.24.4.121:22-172.24.4.1:35212.service - OpenSSH per-connection server daemon (172.24.4.1:35212). May 15 16:32:43.447939 containerd[1540]: time="2025-05-15T16:32:43.447739485Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"c7ec6b834ae2597626481e333fd48a0eab8fca03ab0b775e6100f9874df45693\" pid:5260 exited_at:{seconds:1747326763 nanos:447076896}" May 15 16:32:44.419685 sshd[5244]: Accepted publickey for core from 172.24.4.1 port 35212 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:32:44.424582 sshd-session[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:32:44.446528 systemd-logind[1497]: New session 12 of user core. May 15 16:32:44.459241 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 16:32:45.286660 sshd[5272]: Connection closed by 172.24.4.1 port 35212 May 15 16:32:45.287608 sshd-session[5244]: pam_unix(sshd:session): session closed for user core May 15 16:32:45.296715 systemd[1]: sshd@9-172.24.4.121:22-172.24.4.1:35212.service: Deactivated successfully. May 15 16:32:45.301907 systemd[1]: session-12.scope: Deactivated successfully. May 15 16:32:45.304320 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit. May 15 16:32:45.308547 systemd-logind[1497]: Removed session 12. May 15 16:32:50.320120 systemd[1]: Started sshd@10-172.24.4.121:22-172.24.4.1:46408.service - OpenSSH per-connection server daemon (172.24.4.1:46408). May 15 16:32:51.771446 sshd[5288]: Accepted publickey for core from 172.24.4.1 port 46408 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:32:51.777082 sshd-session[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:32:51.798071 systemd-logind[1497]: New session 13 of user core. May 15 16:32:51.806286 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 16:32:52.637135 sshd[5290]: Connection closed by 172.24.4.1 port 46408 May 15 16:32:52.637340 sshd-session[5288]: pam_unix(sshd:session): session closed for user core May 15 16:32:52.646769 systemd[1]: sshd@10-172.24.4.121:22-172.24.4.1:46408.service: Deactivated successfully. May 15 16:32:52.654720 systemd[1]: session-13.scope: Deactivated successfully. May 15 16:32:52.660747 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit. May 15 16:32:52.665478 systemd-logind[1497]: Removed session 13. May 15 16:32:57.671645 systemd[1]: Started sshd@11-172.24.4.121:22-172.24.4.1:52144.service - OpenSSH per-connection server daemon (172.24.4.1:52144). May 15 16:32:58.814976 sshd[5303]: Accepted publickey for core from 172.24.4.1 port 52144 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:32:58.818948 sshd-session[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:32:58.838469 systemd-logind[1497]: New session 14 of user core. May 15 16:32:58.851299 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 15 16:32:59.584176 sshd[5305]: Connection closed by 172.24.4.1 port 52144 May 15 16:32:59.585224 sshd-session[5303]: pam_unix(sshd:session): session closed for user core May 15 16:32:59.594042 systemd[1]: sshd@11-172.24.4.121:22-172.24.4.1:52144.service: Deactivated successfully. May 15 16:32:59.596278 systemd[1]: session-14.scope: Deactivated successfully. May 15 16:32:59.601034 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit. May 15 16:32:59.603849 systemd[1]: Started sshd@12-172.24.4.121:22-172.24.4.1:52160.service - OpenSSH per-connection server daemon (172.24.4.1:52160). May 15 16:32:59.607258 systemd-logind[1497]: Removed session 14. May 15 16:33:00.813570 sshd[5318]: Accepted publickey for core from 172.24.4.1 port 52160 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:33:00.817561 sshd-session[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:33:00.835987 systemd-logind[1497]: New session 15 of user core. May 15 16:33:00.845501 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 16:33:01.798169 sshd[5321]: Connection closed by 172.24.4.1 port 52160 May 15 16:33:01.798037 sshd-session[5318]: pam_unix(sshd:session): session closed for user core May 15 16:33:01.823427 systemd[1]: sshd@12-172.24.4.121:22-172.24.4.1:52160.service: Deactivated successfully. May 15 16:33:01.829446 systemd[1]: session-15.scope: Deactivated successfully. May 15 16:33:01.832805 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit. May 15 16:33:01.843548 systemd[1]: Started sshd@13-172.24.4.121:22-172.24.4.1:52170.service - OpenSSH per-connection server daemon (172.24.4.1:52170). May 15 16:33:01.848043 systemd-logind[1497]: Removed session 15. May 15 16:33:02.186686 containerd[1540]: time="2025-05-15T16:33:02.186448400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" id:\"b19668f8d946a53c9c0f831043bee3f4b9fd5f08b6fe665940546ba1d82a6b49\" pid:5346 exited_at:{seconds:1747326782 nanos:184089176}" May 15 16:33:03.215072 sshd[5331]: Accepted publickey for core from 172.24.4.1 port 52170 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:33:03.217975 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:33:03.231020 systemd-logind[1497]: New session 16 of user core. May 15 16:33:03.247272 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 16:33:03.944932 sshd[5357]: Connection closed by 172.24.4.1 port 52170 May 15 16:33:03.946281 sshd-session[5331]: pam_unix(sshd:session): session closed for user core May 15 16:33:03.952250 systemd[1]: sshd@13-172.24.4.121:22-172.24.4.1:52170.service: Deactivated successfully. May 15 16:33:03.954645 systemd[1]: session-16.scope: Deactivated successfully. May 15 16:33:03.956678 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit. May 15 16:33:03.960107 systemd-logind[1497]: Removed session 16. May 15 16:33:08.966131 systemd[1]: Started sshd@14-172.24.4.121:22-172.24.4.1:56952.service - OpenSSH per-connection server daemon (172.24.4.1:56952). 
May 15 16:33:09.922391 sshd[5370]: Accepted publickey for core from 172.24.4.1 port 56952 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:33:09.925566 sshd-session[5370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:33:09.938770 systemd-logind[1497]: New session 17 of user core. May 15 16:33:09.959223 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 16:33:10.518069 sshd[5376]: Connection closed by 172.24.4.1 port 56952 May 15 16:33:10.519621 sshd-session[5370]: pam_unix(sshd:session): session closed for user core May 15 16:33:10.530067 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit. May 15 16:33:10.532343 systemd[1]: sshd@14-172.24.4.121:22-172.24.4.1:56952.service: Deactivated successfully. May 15 16:33:10.539456 systemd[1]: session-17.scope: Deactivated successfully. May 15 16:33:10.545988 systemd-logind[1497]: Removed session 17. May 15 16:33:13.467494 containerd[1540]: time="2025-05-15T16:33:13.467439508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"b803598a629aad599eb6d4b689057b10afd90269fc197de4432a32b87f6b5150\" pid:5399 exited_at:{seconds:1747326793 nanos:467025958}" May 15 16:33:15.540714 systemd[1]: Started sshd@15-172.24.4.121:22-172.24.4.1:59590.service - OpenSSH per-connection server daemon (172.24.4.1:59590). May 15 16:33:16.937013 sshd[5409]: Accepted publickey for core from 172.24.4.1 port 59590 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:33:16.940513 sshd-session[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:33:16.956658 systemd-logind[1497]: New session 18 of user core. May 15 16:33:16.972291 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 16:33:17.794924 sshd[5411]: Connection closed by 172.24.4.1 port 59590 May 15 16:33:17.794611 sshd-session[5409]: pam_unix(sshd:session): session closed for user core May 15 16:33:17.801301 systemd[1]: sshd@15-172.24.4.121:22-172.24.4.1:59590.service: Deactivated successfully. May 15 16:33:17.804391 systemd[1]: session-18.scope: Deactivated successfully. May 15 16:33:17.806632 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit. May 15 16:33:17.808843 systemd-logind[1497]: Removed session 18. May 15 16:33:20.965783 containerd[1540]: time="2025-05-15T16:33:20.965498103Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"b7e32499244e1d70f4d1b9aa12ee8ae6a771a31d6f699c1d171bd923e0b4416e\" pid:5435 exited_at:{seconds:1747326800 nanos:964978085}" May 15 16:33:22.844757 systemd[1]: Started sshd@16-172.24.4.121:22-172.24.4.1:59596.service - OpenSSH per-connection server daemon (172.24.4.1:59596). May 15 16:33:24.086199 sshd[5445]: Accepted publickey for core from 172.24.4.1 port 59596 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:33:24.090326 sshd-session[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:33:24.109026 systemd-logind[1497]: New session 19 of user core. May 15 16:33:24.123384 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 15 16:33:24.986685 sshd[5447]: Connection closed by 172.24.4.1 port 59596 May 15 16:33:24.989255 sshd-session[5445]: pam_unix(sshd:session): session closed for user core May 15 16:33:24.997190 systemd[1]: sshd@16-172.24.4.121:22-172.24.4.1:59596.service: Deactivated successfully. May 15 16:33:24.999590 systemd[1]: session-19.scope: Deactivated successfully. May 15 16:33:25.003434 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit. May 15 16:33:25.006476 systemd[1]: Started sshd@17-172.24.4.121:22-172.24.4.1:51602.service - OpenSSH per-connection server daemon (172.24.4.1:51602). May 15 16:33:25.013242 systemd-logind[1497]: Removed session 19. May 15 16:33:26.190961 sshd[5458]: Accepted publickey for core from 172.24.4.1 port 51602 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:33:26.194759 sshd-session[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:33:26.212984 systemd-logind[1497]: New session 20 of user core. May 15 16:33:26.222263 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 16:33:27.294673 sshd[5460]: Connection closed by 172.24.4.1 port 51602 May 15 16:33:27.294285 sshd-session[5458]: pam_unix(sshd:session): session closed for user core May 15 16:33:27.315627 systemd[1]: sshd@17-172.24.4.121:22-172.24.4.1:51602.service: Deactivated successfully. May 15 16:33:27.320916 systemd[1]: session-20.scope: Deactivated successfully. May 15 16:33:27.325047 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit. May 15 16:33:27.332506 systemd[1]: Started sshd@18-172.24.4.121:22-172.24.4.1:51606.service - OpenSSH per-connection server daemon (172.24.4.1:51606). May 15 16:33:27.335134 systemd-logind[1497]: Removed session 20. May 15 16:33:28.646963 sshd[5470]: Accepted publickey for core from 172.24.4.1 port 51606 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:33:28.649589 sshd-session[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:33:28.667009 systemd-logind[1497]: New session 21 of user core. May 15 16:33:28.677268 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 16:33:32.359334 containerd[1540]: time="2025-05-15T16:33:32.359043891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" id:\"29ccf0a0b70bf02aaeab37870c2042eff688573ce9ba2303914df9f0316c52e2\" pid:5496 exited_at:{seconds:1747326812 nanos:356781510}" May 15 16:33:32.407351 sshd[5473]: Connection closed by 172.24.4.1 port 51606 May 15 16:33:32.408037 sshd-session[5470]: pam_unix(sshd:session): session closed for user core May 15 16:33:32.424475 systemd[1]: sshd@18-172.24.4.121:22-172.24.4.1:51606.service: Deactivated successfully. May 15 16:33:32.427282 systemd[1]: session-21.scope: Deactivated successfully. May 15 16:33:32.427606 systemd[1]: session-21.scope: Consumed 956ms CPU time, 70M memory peak. May 15 16:33:32.430001 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit. May 15 16:33:32.434295 systemd[1]: Started sshd@19-172.24.4.121:22-172.24.4.1:51620.service - OpenSSH per-connection server daemon (172.24.4.1:51620). May 15 16:33:32.436070 systemd-logind[1497]: Removed session 21. 
May 15 16:33:33.741117 sshd[5516]: Accepted publickey for core from 172.24.4.1 port 51620 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:33:33.744437 sshd-session[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:33:33.763961 systemd-logind[1497]: New session 22 of user core. May 15 16:33:33.773310 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 16:33:34.868934 sshd[5518]: Connection closed by 172.24.4.1 port 51620 May 15 16:33:34.870655 sshd-session[5516]: pam_unix(sshd:session): session closed for user core May 15 16:33:34.886763 systemd[1]: sshd@19-172.24.4.121:22-172.24.4.1:51620.service: Deactivated successfully. May 15 16:33:34.892852 systemd[1]: session-22.scope: Deactivated successfully. May 15 16:33:34.897345 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit. May 15 16:33:34.908005 systemd[1]: Started sshd@20-172.24.4.121:22-172.24.4.1:50932.service - OpenSSH per-connection server daemon (172.24.4.1:50932). May 15 16:33:34.913288 systemd-logind[1497]: Removed session 22. May 15 16:33:36.043140 sshd[5528]: Accepted publickey for core from 172.24.4.1 port 50932 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:33:36.046592 sshd-session[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:33:36.057844 systemd-logind[1497]: New session 23 of user core. May 15 16:33:36.075391 systemd[1]: Started session-23.scope - Session 23 of User core. May 15 16:33:36.942655 sshd[5530]: Connection closed by 172.24.4.1 port 50932 May 15 16:33:36.944358 sshd-session[5528]: pam_unix(sshd:session): session closed for user core May 15 16:33:36.953048 systemd[1]: sshd@20-172.24.4.121:22-172.24.4.1:50932.service: Deactivated successfully. May 15 16:33:36.960167 systemd[1]: session-23.scope: Deactivated successfully. May 15 16:33:36.963949 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit. May 15 16:33:36.968651 systemd-logind[1497]: Removed session 23. 
May 15 16:33:37.108387 containerd[1540]: time="2025-05-15T16:33:37.108030796Z" level=warning msg="container event discarded" container=4a152f4a384816839894b83ac950e3b410d72337f826a3a7025060c5bcfb6c64 type=CONTAINER_CREATED_EVENT May 15 16:33:37.109312 containerd[1540]: time="2025-05-15T16:33:37.109216122Z" level=warning msg="container event discarded" container=4a152f4a384816839894b83ac950e3b410d72337f826a3a7025060c5bcfb6c64 type=CONTAINER_STARTED_EVENT May 15 16:33:37.150733 containerd[1540]: time="2025-05-15T16:33:37.150596083Z" level=warning msg="container event discarded" container=e3899e43b37ece2dd7a191065501a2e6f4b785a4791e3a97a56937236e4f5658 type=CONTAINER_CREATED_EVENT May 15 16:33:37.150733 containerd[1540]: time="2025-05-15T16:33:37.150686121Z" level=warning msg="container event discarded" container=e3899e43b37ece2dd7a191065501a2e6f4b785a4791e3a97a56937236e4f5658 type=CONTAINER_STARTED_EVENT May 15 16:33:37.173591 containerd[1540]: time="2025-05-15T16:33:37.173376280Z" level=warning msg="container event discarded" container=9eb886cf0ee6f03ab21915d5edbea404d5f3dc0a9fc9b30753374985c43a1d6f type=CONTAINER_CREATED_EVENT May 15 16:33:37.173591 containerd[1540]: time="2025-05-15T16:33:37.173468873Z" level=warning msg="container event discarded" container=9eb886cf0ee6f03ab21915d5edbea404d5f3dc0a9fc9b30753374985c43a1d6f type=CONTAINER_STARTED_EVENT May 15 16:33:37.173591 containerd[1540]: time="2025-05-15T16:33:37.173531610Z" level=warning msg="container event discarded" container=1c1e0b94033ff727535b70477e34fea60aa0319a51699707f4d80c7ef0e43eaa type=CONTAINER_CREATED_EVENT May 15 16:33:37.212206 containerd[1540]: time="2025-05-15T16:33:37.210811266Z" level=warning msg="container event discarded" container=6dde7add2b5407cc1cc2dccad70b03dc94d3442850302f75505c23e64050db8c type=CONTAINER_CREATED_EVENT May 15 16:33:37.228686 containerd[1540]: time="2025-05-15T16:33:37.228536043Z" level=warning msg="container event discarded" container=b878f401a54e97fa6b11b3ab13186a732b09bac37bf0df076eafde812d63d4e4 type=CONTAINER_CREATED_EVENT May 15 16:33:37.334270 containerd[1540]: time="2025-05-15T16:33:37.334119873Z" level=warning msg="container event discarded" container=1c1e0b94033ff727535b70477e34fea60aa0319a51699707f4d80c7ef0e43eaa type=CONTAINER_STARTED_EVENT May 15 16:33:37.349125 containerd[1540]: time="2025-05-15T16:33:37.348988261Z" level=warning msg="container event discarded" container=6dde7add2b5407cc1cc2dccad70b03dc94d3442850302f75505c23e64050db8c type=CONTAINER_STARTED_EVENT May 15 16:33:37.411391 containerd[1540]: time="2025-05-15T16:33:37.411277566Z" level=warning msg="container event discarded" container=b878f401a54e97fa6b11b3ab13186a732b09bac37bf0df076eafde812d63d4e4 type=CONTAINER_STARTED_EVENT May 15 16:33:41.965725 systemd[1]: Started sshd@21-172.24.4.121:22-172.24.4.1:50936.service - OpenSSH per-connection server daemon (172.24.4.1:50936). May 15 16:33:43.151699 sshd[5545]: Accepted publickey for core from 172.24.4.1 port 50936 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:33:43.156166 sshd-session[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:33:43.168859 systemd-logind[1497]: New session 24 of user core. May 15 16:33:43.183220 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 15 16:33:43.480501 containerd[1540]: time="2025-05-15T16:33:43.480049917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"6a7d20659d234b6e6fc46d3da5e4e1b32bf0381bfa095149d3745f2701d26ffc\" pid:5561 exited_at:{seconds:1747326823 nanos:478446982}" May 15 16:33:43.954604 sshd[5549]: Connection closed by 172.24.4.1 port 50936 May 15 16:33:43.954425 sshd-session[5545]: pam_unix(sshd:session): session closed for user core May 15 16:33:43.964793 systemd-logind[1497]: Session 24 logged out. Waiting for processes to exit. May 15 16:33:43.967181 systemd[1]: sshd@21-172.24.4.121:22-172.24.4.1:50936.service: Deactivated successfully. May 15 16:33:43.973391 systemd[1]: session-24.scope: Deactivated successfully. May 15 16:33:43.977795 systemd-logind[1497]: Removed session 24. May 15 16:33:48.979581 systemd[1]: Started sshd@22-172.24.4.121:22-172.24.4.1:53038.service - OpenSSH per-connection server daemon (172.24.4.1:53038). May 15 16:33:49.303432 containerd[1540]: time="2025-05-15T16:33:49.302822242Z" level=warning msg="container event discarded" container=81c3514bb564cff936e8a23f16f0a9810cec44d471946b6597fbf40ab6bfcdc7 type=CONTAINER_CREATED_EVENT May 15 16:33:49.304808 containerd[1540]: time="2025-05-15T16:33:49.304003282Z" level=warning msg="container event discarded" container=81c3514bb564cff936e8a23f16f0a9810cec44d471946b6597fbf40ab6bfcdc7 type=CONTAINER_STARTED_EVENT May 15 16:33:49.343776 containerd[1540]: time="2025-05-15T16:33:49.343606934Z" level=warning msg="container event discarded" container=b025f975438de3a11681419e7daf2f5df2903d37464e3c5ed5d39c4515165168 type=CONTAINER_CREATED_EVENT May 15 16:33:49.343986 containerd[1540]: time="2025-05-15T16:33:49.343782502Z" level=warning msg="container event discarded" container=b025f975438de3a11681419e7daf2f5df2903d37464e3c5ed5d39c4515165168 type=CONTAINER_STARTED_EVENT May 15 16:33:49.343986 containerd[1540]: time="2025-05-15T16:33:49.343810864Z" level=warning msg="container event discarded" container=9c866e3cfbd414415d1d13bc7a86ea6455846a77d5ad9b1741a6e76e0e7f94f4 type=CONTAINER_CREATED_EVENT May 15 16:33:49.434680 containerd[1540]: time="2025-05-15T16:33:49.434529627Z" level=warning msg="container event discarded" container=9c866e3cfbd414415d1d13bc7a86ea6455846a77d5ad9b1741a6e76e0e7f94f4 type=CONTAINER_STARTED_EVENT May 15 16:33:50.353070 sshd[5582]: Accepted publickey for core from 172.24.4.1 port 53038 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:33:50.358370 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:33:50.383989 systemd-logind[1497]: New session 25 of user core. May 15 16:33:50.396463 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 16:33:51.202068 sshd[5586]: Connection closed by 172.24.4.1 port 53038 May 15 16:33:51.204563 sshd-session[5582]: pam_unix(sshd:session): session closed for user core May 15 16:33:51.215531 systemd[1]: sshd@22-172.24.4.121:22-172.24.4.1:53038.service: Deactivated successfully. May 15 16:33:51.223467 systemd[1]: session-25.scope: Deactivated successfully. May 15 16:33:51.228814 systemd-logind[1497]: Session 25 logged out. Waiting for processes to exit. May 15 16:33:51.232138 systemd-logind[1497]: Removed session 25. 
May 15 16:33:52.498975 containerd[1540]: time="2025-05-15T16:33:52.498566428Z" level=warning msg="container event discarded" container=20f59cb2f72a89fb8b92fbf193f6a1790d26a1505a54fbc414ed56bfdfa7113c type=CONTAINER_CREATED_EVENT May 15 16:33:52.587269 containerd[1540]: time="2025-05-15T16:33:52.587115485Z" level=warning msg="container event discarded" container=20f59cb2f72a89fb8b92fbf193f6a1790d26a1505a54fbc414ed56bfdfa7113c type=CONTAINER_STARTED_EVENT May 15 16:33:56.229474 systemd[1]: Started sshd@23-172.24.4.121:22-172.24.4.1:46932.service - OpenSSH per-connection server daemon (172.24.4.1:46932). May 15 16:33:56.543133 containerd[1540]: time="2025-05-15T16:33:56.542729649Z" level=warning msg="container event discarded" container=94338eb06c9902333709c6e13310ff02437b234bf82e61825059e2275a35efe1 type=CONTAINER_CREATED_EVENT May 15 16:33:56.543133 containerd[1540]: time="2025-05-15T16:33:56.542833441Z" level=warning msg="container event discarded" container=94338eb06c9902333709c6e13310ff02437b234bf82e61825059e2275a35efe1 type=CONTAINER_STARTED_EVENT May 15 16:33:56.567440 containerd[1540]: time="2025-05-15T16:33:56.567179149Z" level=warning msg="container event discarded" container=489724a1091d53f7f52483284b5986d0052265de80cc0fa21b0b32e1c2f4239c type=CONTAINER_CREATED_EVENT May 15 16:33:56.567440 containerd[1540]: time="2025-05-15T16:33:56.567384952Z" level=warning msg="container event discarded" container=489724a1091d53f7f52483284b5986d0052265de80cc0fa21b0b32e1c2f4239c type=CONTAINER_STARTED_EVENT May 15 16:33:57.539182 sshd[5598]: Accepted publickey for core from 172.24.4.1 port 46932 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:33:57.543423 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:33:57.556308 systemd-logind[1497]: New session 26 of user core. May 15 16:33:57.578234 systemd[1]: Started session-26.scope - Session 26 of User core. May 15 16:33:58.396944 sshd[5600]: Connection closed by 172.24.4.1 port 46932 May 15 16:33:58.398202 sshd-session[5598]: pam_unix(sshd:session): session closed for user core May 15 16:33:58.407058 systemd[1]: sshd@23-172.24.4.121:22-172.24.4.1:46932.service: Deactivated successfully. May 15 16:33:58.414356 systemd[1]: session-26.scope: Deactivated successfully. May 15 16:33:58.418615 systemd-logind[1497]: Session 26 logged out. Waiting for processes to exit. May 15 16:33:58.423710 systemd-logind[1497]: Removed session 26. 
May 15 16:33:58.711275 containerd[1540]: time="2025-05-15T16:33:58.711014701Z" level=warning msg="container event discarded" container=b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c type=CONTAINER_CREATED_EVENT May 15 16:33:58.785949 containerd[1540]: time="2025-05-15T16:33:58.785744122Z" level=warning msg="container event discarded" container=b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c type=CONTAINER_STARTED_EVENT May 15 16:33:59.220121 containerd[1540]: time="2025-05-15T16:33:59.219991150Z" level=warning msg="container event discarded" container=b12363ca08dbb81fa7e4c9436ae6ff58373b0add0659ab0d41e8159ed5dbe76c type=CONTAINER_STOPPED_EVENT May 15 16:34:01.994237 containerd[1540]: time="2025-05-15T16:34:01.993941969Z" level=warning msg="container event discarded" container=7eb500d459a0501f30eabf89444ddb8819c6d4d770fb0a7f73a7961ae065fd36 type=CONTAINER_CREATED_EVENT May 15 16:34:02.102767 containerd[1540]: time="2025-05-15T16:34:02.101133899Z" level=warning msg="container event discarded" container=7eb500d459a0501f30eabf89444ddb8819c6d4d770fb0a7f73a7961ae065fd36 type=CONTAINER_STARTED_EVENT May 15 16:34:02.230566 containerd[1540]: time="2025-05-15T16:34:02.230416312Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" id:\"f6443d63ab05255e046abd82fb71e4cc4ca0e38aec9c5abd235587928f619239\" pid:5624 exited_at:{seconds:1747326842 nanos:228135884}" May 15 16:34:03.435100 systemd[1]: Started sshd@24-172.24.4.121:22-172.24.4.1:46940.service - OpenSSH per-connection server daemon (172.24.4.1:46940). May 15 16:34:04.703271 sshd[5636]: Accepted publickey for core from 172.24.4.1 port 46940 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:34:04.708417 sshd-session[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:34:04.731310 systemd-logind[1497]: New session 27 of user core. May 15 16:34:04.741183 systemd[1]: Started session-27.scope - Session 27 of User core. May 15 16:34:05.578408 sshd[5638]: Connection closed by 172.24.4.1 port 46940 May 15 16:34:05.579781 sshd-session[5636]: pam_unix(sshd:session): session closed for user core May 15 16:34:05.587285 systemd-logind[1497]: Session 27 logged out. Waiting for processes to exit. May 15 16:34:05.589452 systemd[1]: sshd@24-172.24.4.121:22-172.24.4.1:46940.service: Deactivated successfully. May 15 16:34:05.597648 systemd[1]: session-27.scope: Deactivated successfully. May 15 16:34:05.604330 systemd-logind[1497]: Removed session 27. May 15 16:34:08.271978 containerd[1540]: time="2025-05-15T16:34:08.271664770Z" level=warning msg="container event discarded" container=6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e type=CONTAINER_CREATED_EVENT May 15 16:34:08.356300 containerd[1540]: time="2025-05-15T16:34:08.356160671Z" level=warning msg="container event discarded" container=6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e type=CONTAINER_STARTED_EVENT May 15 16:34:10.627724 systemd[1]: Started sshd@25-172.24.4.121:22-172.24.4.1:33834.service - OpenSSH per-connection server daemon (172.24.4.1:33834). 
May 15 16:34:10.951264 containerd[1540]: time="2025-05-15T16:34:10.950961849Z" level=warning msg="container event discarded" container=6fedfdce13296b3c2bbb792a8c37415e820a4bafb628ea2008e1a8f893b9529e type=CONTAINER_STOPPED_EVENT May 15 16:34:11.913801 sshd[5667]: Accepted publickey for core from 172.24.4.1 port 33834 ssh2: RSA SHA256:K4PuacMdCgnHSu7IHjQ1cPA+A+tWk4HKtDB5vg6uiYI May 15 16:34:11.920490 sshd-session[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 16:34:11.935985 systemd-logind[1497]: New session 28 of user core. May 15 16:34:11.944216 systemd[1]: Started session-28.scope - Session 28 of User core. May 15 16:34:12.767116 sshd[5669]: Connection closed by 172.24.4.1 port 33834 May 15 16:34:12.768462 sshd-session[5667]: pam_unix(sshd:session): session closed for user core May 15 16:34:12.776151 systemd[1]: sshd@25-172.24.4.121:22-172.24.4.1:33834.service: Deactivated successfully. May 15 16:34:12.783682 systemd[1]: session-28.scope: Deactivated successfully. May 15 16:34:12.789837 systemd-logind[1497]: Session 28 logged out. Waiting for processes to exit. May 15 16:34:12.793160 systemd-logind[1497]: Removed session 28. May 15 16:34:13.451323 containerd[1540]: time="2025-05-15T16:34:13.451273286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"f30d9fdcd82b179a7647b7663d3cba2670e656be7fb3c956d9a95a9df7e4946a\" pid:5693 exited_at:{seconds:1747326853 nanos:450815322}" May 15 16:34:20.966358 containerd[1540]: time="2025-05-15T16:34:20.966302536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"5f831c91305de62b88b6ad94e6d2dc09fc50fdbf69c95b8ddff15bb772f76839\" pid:5716 exited_at:{seconds:1747326860 nanos:965759665}" May 15 16:34:21.560106 containerd[1540]: time="2025-05-15T16:34:21.559693270Z" level=warning msg="container event discarded" container=9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c type=CONTAINER_CREATED_EVENT May 15 16:34:21.656441 containerd[1540]: time="2025-05-15T16:34:21.656283240Z" level=warning msg="container event discarded" container=9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c type=CONTAINER_STARTED_EVENT May 15 16:34:23.075930 containerd[1540]: time="2025-05-15T16:34:23.075588147Z" level=warning msg="container event discarded" container=6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538 type=CONTAINER_CREATED_EVENT May 15 16:34:23.075930 containerd[1540]: time="2025-05-15T16:34:23.075701528Z" level=warning msg="container event discarded" container=6b64b1cc3c78171f3d2ab88a71f7d9fcd3f1eb8cb6d2272de42419d80a704538 type=CONTAINER_STARTED_EVENT May 15 16:34:25.244418 containerd[1540]: time="2025-05-15T16:34:25.244046729Z" level=warning msg="container event discarded" container=df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd type=CONTAINER_CREATED_EVENT May 15 16:34:25.244418 containerd[1540]: time="2025-05-15T16:34:25.244359753Z" level=warning msg="container event discarded" container=df3ca0faba7ce16e74489ad5a5a2b9e068904622ef210b958613fcfc35b81afd type=CONTAINER_STARTED_EVENT May 15 16:34:25.278074 containerd[1540]: time="2025-05-15T16:34:25.277930902Z" level=warning msg="container event discarded" container=70bfe5cc51066bdf6869fac284ab93cce8e5dd5e5ec682b3753a7c59d2e9304d type=CONTAINER_CREATED_EVENT May 15 16:34:25.363209 containerd[1540]: 
time="2025-05-15T16:34:25.362992448Z" level=warning msg="container event discarded" container=70bfe5cc51066bdf6869fac284ab93cce8e5dd5e5ec682b3753a7c59d2e9304d type=CONTAINER_STARTED_EVENT May 15 16:34:25.410988 containerd[1540]: time="2025-05-15T16:34:25.410706246Z" level=warning msg="container event discarded" container=f573c1e452f80be11a700cde05c523a07acd9fbc699678b9b2c71042d834a06d type=CONTAINER_CREATED_EVENT May 15 16:34:25.508229 containerd[1540]: time="2025-05-15T16:34:25.507948965Z" level=warning msg="container event discarded" container=f573c1e452f80be11a700cde05c523a07acd9fbc699678b9b2c71042d834a06d type=CONTAINER_STARTED_EVENT May 15 16:34:26.161081 containerd[1540]: time="2025-05-15T16:34:26.160928943Z" level=warning msg="container event discarded" container=985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa type=CONTAINER_CREATED_EVENT May 15 16:34:26.161081 containerd[1540]: time="2025-05-15T16:34:26.161025131Z" level=warning msg="container event discarded" container=985c36a79fb4ba3ccc877080d475abcddf40d399f211cf1eb14eb0cb9996a6fa type=CONTAINER_STARTED_EVENT May 15 16:34:27.465312 containerd[1540]: time="2025-05-15T16:34:27.465148885Z" level=warning msg="container event discarded" container=7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d type=CONTAINER_CREATED_EVENT May 15 16:34:27.465312 containerd[1540]: time="2025-05-15T16:34:27.465252138Z" level=warning msg="container event discarded" container=7ca0ae33b7cf62bfd85372931ec70c094d0db7078d9b9f9fd97138c8a88f280d type=CONTAINER_STARTED_EVENT May 15 16:34:27.514954 containerd[1540]: time="2025-05-15T16:34:27.514706955Z" level=warning msg="container event discarded" container=4a80b3a2d5d5e6f7edd789862be6e51f642787760966e048117de6901b1cdc8b type=CONTAINER_CREATED_EVENT May 15 16:34:27.541595 containerd[1540]: time="2025-05-15T16:34:27.541396570Z" level=warning msg="container event discarded" container=f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f type=CONTAINER_CREATED_EVENT May 15 16:34:27.542122 containerd[1540]: time="2025-05-15T16:34:27.541994083Z" level=warning msg="container event discarded" container=f412ec1f115472b3a84aa5452b9787d89c1fc401bb53ed863b3531731f5c0f7f type=CONTAINER_STARTED_EVENT May 15 16:34:27.576622 containerd[1540]: time="2025-05-15T16:34:27.576443301Z" level=warning msg="container event discarded" container=174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8 type=CONTAINER_CREATED_EVENT May 15 16:34:27.576622 containerd[1540]: time="2025-05-15T16:34:27.576583352Z" level=warning msg="container event discarded" container=174d8a60d5f8c24442b10e83fe1ddd92dc7dddd5e2c081330798db4da46c73a8 type=CONTAINER_STARTED_EVENT May 15 16:34:27.625678 containerd[1540]: time="2025-05-15T16:34:27.625448950Z" level=warning msg="container event discarded" container=4a80b3a2d5d5e6f7edd789862be6e51f642787760966e048117de6901b1cdc8b type=CONTAINER_STARTED_EVENT May 15 16:34:28.818445 containerd[1540]: time="2025-05-15T16:34:28.818249476Z" level=warning msg="container event discarded" container=af17c03f4abaf78c6ce2d480fee745f73b91fb796267c9228ae951916f9e3c67 type=CONTAINER_CREATED_EVENT May 15 16:34:28.827718 update_engine[1498]: I20250515 16:34:28.827405 1498 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 15 16:34:28.829656 update_engine[1498]: I20250515 16:34:28.828828 1498 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 15 16:34:28.831081 update_engine[1498]: I20250515 
16:34:28.831015 1498 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 15 16:34:28.833287 update_engine[1498]: I20250515 16:34:28.833197 1498 omaha_request_params.cc:62] Current group set to developer May 15 16:34:28.836161 update_engine[1498]: I20250515 16:34:28.835944 1498 update_attempter.cc:499] Already updated boot flags. Skipping. May 15 16:34:28.836161 update_engine[1498]: I20250515 16:34:28.836012 1498 update_attempter.cc:643] Scheduling an action processor start. May 15 16:34:28.836161 update_engine[1498]: I20250515 16:34:28.836092 1498 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 16:34:28.836538 update_engine[1498]: I20250515 16:34:28.836289 1498 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 15 16:34:28.836627 update_engine[1498]: I20250515 16:34:28.836526 1498 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 16:34:28.836627 update_engine[1498]: I20250515 16:34:28.836556 1498 omaha_request_action.cc:272] Request: May 15 16:34:28.836627 update_engine[1498]: I20250515 16:34:28.836591 1498 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 16:34:28.861357 update_engine[1498]: I20250515 16:34:28.856511 1498 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 16:34:28.861357 update_engine[1498]: I20250515 16:34:28.857862 1498 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 15 16:34:28.864546 update_engine[1498]: E20250515 16:34:28.864209 1498 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 16:34:28.864546 update_engine[1498]: I20250515 16:34:28.864418 1498 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 15 16:34:28.868209 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 15 16:34:28.964117 containerd[1540]: time="2025-05-15T16:34:28.963974228Z" level=warning msg="container event discarded" container=af17c03f4abaf78c6ce2d480fee745f73b91fb796267c9228ae951916f9e3c67 type=CONTAINER_STARTED_EVENT May 15 16:34:32.237807 containerd[1540]: time="2025-05-15T16:34:32.237673134Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" id:\"57c85f925b063e02523f29038541897dddba3924e9b98b98573fdd6048d15728\" pid:5739 exited_at:{seconds:1747326872 nanos:236386727}" May 15 16:34:34.712707 containerd[1540]: time="2025-05-15T16:34:34.712560458Z" level=warning msg="container event discarded" container=0844c672694440f7310a93ac7ff54c9eaad327d1e5f9c9f765845a4bf716280b type=CONTAINER_CREATED_EVENT May 15 16:34:34.804720 containerd[1540]: time="2025-05-15T16:34:34.804521577Z" level=warning msg="container event discarded" container=0844c672694440f7310a93ac7ff54c9eaad327d1e5f9c9f765845a4bf716280b type=CONTAINER_STARTED_EVENT May 15 16:34:38.816343 update_engine[1498]: I20250515 16:34:38.816063 1498 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 16:34:38.817795 update_engine[1498]: I20250515 16:34:38.816633 1498 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 16:34:38.817795 update_engine[1498]: I20250515 16:34:38.817395 1498 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 15 16:34:38.823391 update_engine[1498]: E20250515 16:34:38.823299 1498 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 16:34:38.823607 update_engine[1498]: I20250515 16:34:38.823506 1498 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 15 16:34:39.676729 containerd[1540]: time="2025-05-15T16:34:39.676505256Z" level=warning msg="container event discarded" container=69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad type=CONTAINER_CREATED_EVENT May 15 16:34:39.786352 containerd[1540]: time="2025-05-15T16:34:39.786061346Z" level=warning msg="container event discarded" container=69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad type=CONTAINER_STARTED_EVENT May 15 16:34:40.408052 containerd[1540]: time="2025-05-15T16:34:40.407861287Z" level=warning msg="container event discarded" container=613fb8bebe20c5609f70dada63171b62e3a180a07ddb0ea84f994ce4d06a8f4d type=CONTAINER_CREATED_EVENT May 15 16:34:40.530769 containerd[1540]: time="2025-05-15T16:34:40.530578156Z" level=warning msg="container event discarded" container=613fb8bebe20c5609f70dada63171b62e3a180a07ddb0ea84f994ce4d06a8f4d type=CONTAINER_STARTED_EVENT May 15 16:34:43.473139 containerd[1540]: time="2025-05-15T16:34:43.472782156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"539e6d7a48d735b4408283f4eb40b2cb5a345c0156682da5df29f347210283e6\" pid:5766 exited_at:{seconds:1747326883 nanos:472167119}" May 15 16:34:48.820221 update_engine[1498]: I20250515 16:34:48.819615 1498 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 16:34:48.824587 update_engine[1498]: I20250515 16:34:48.822782 1498 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 16:34:48.824587 update_engine[1498]: I20250515 16:34:48.824390 1498 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 16:34:48.830581 update_engine[1498]: E20250515 16:34:48.830430 1498 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 16:34:48.830774 update_engine[1498]: I20250515 16:34:48.830668 1498 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 15 16:34:58.819106 update_engine[1498]: I20250515 16:34:58.818946 1498 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 16:34:58.820470 update_engine[1498]: I20250515 16:34:58.819484 1498 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 16:34:58.820470 update_engine[1498]: I20250515 16:34:58.820246 1498 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 16:34:58.826046 update_engine[1498]: E20250515 16:34:58.825942 1498 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 16:34:58.826046 update_engine[1498]: I20250515 16:34:58.826045 1498 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 16:34:58.826443 update_engine[1498]: I20250515 16:34:58.826114 1498 omaha_request_action.cc:617] Omaha request response: May 15 16:34:58.827027 update_engine[1498]: E20250515 16:34:58.826925 1498 omaha_request_action.cc:636] Omaha request network transfer failed. May 15 16:34:58.828100 update_engine[1498]: I20250515 16:34:58.828008 1498 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
May 15 16:34:58.828100 update_engine[1498]: I20250515 16:34:58.828047 1498 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 16:34:58.828100 update_engine[1498]: I20250515 16:34:58.828083 1498 update_attempter.cc:306] Processing Done. May 15 16:34:58.828484 update_engine[1498]: E20250515 16:34:58.828212 1498 update_attempter.cc:619] Update failed. May 15 16:34:58.828484 update_engine[1498]: I20250515 16:34:58.828245 1498 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 15 16:34:58.828484 update_engine[1498]: I20250515 16:34:58.828258 1498 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 15 16:34:58.828484 update_engine[1498]: I20250515 16:34:58.828270 1498 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 15 16:34:58.830610 update_engine[1498]: I20250515 16:34:58.829232 1498 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 15 16:34:58.830610 update_engine[1498]: I20250515 16:34:58.829442 1498 omaha_request_action.cc:271] Posting an Omaha request to disabled May 15 16:34:58.830610 update_engine[1498]: I20250515 16:34:58.829458 1498 omaha_request_action.cc:272] Request: May 15 16:34:58.830610 update_engine[1498]: I20250515 16:34:58.829471 1498 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 15 16:34:58.830610 update_engine[1498]: I20250515 16:34:58.829784 1498 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 15 16:34:58.830610 update_engine[1498]: I20250515 16:34:58.830448 1498 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 15 16:34:58.834029 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 15 16:34:58.836552 update_engine[1498]: E20250515 16:34:58.836112 1498 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 15 16:34:58.836552 update_engine[1498]: I20250515 16:34:58.836200 1498 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 15 16:34:58.836552 update_engine[1498]: I20250515 16:34:58.836217 1498 omaha_request_action.cc:617] Omaha request response: May 15 16:34:58.836552 update_engine[1498]: I20250515 16:34:58.836230 1498 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 16:34:58.836552 update_engine[1498]: I20250515 16:34:58.836241 1498 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 15 16:34:58.836552 update_engine[1498]: I20250515 16:34:58.836252 1498 update_attempter.cc:306] Processing Done. May 15 16:34:58.836552 update_engine[1498]: I20250515 16:34:58.836265 1498 update_attempter.cc:310] Error event sent. 
May 15 16:34:58.836552 update_engine[1498]: I20250515 16:34:58.836308 1498 update_check_scheduler.cc:74] Next update check in 45m13s May 15 16:34:58.837602 locksmithd[1522]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 15 16:35:02.250591 containerd[1540]: time="2025-05-15T16:35:02.250353128Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fb8c480a3c07ebe3901902a36e1b83afd796667a830b576cce0ac3f3fc6ae9c\" id:\"9c4ad6f3c6f4d862991197dda7ea3eabc9b954eb8a2c66f7599e717eef050878\" pid:5799 exited_at:{seconds:1747326902 nanos:247225837}" May 15 16:35:13.461919 containerd[1540]: time="2025-05-15T16:35:13.461851775Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"99db5595910698f6eaa8f91be6141f3d8c1e8938a9247cf62555fde74236717a\" pid:5822 exited_at:{seconds:1747326913 nanos:461319371}" May 15 16:35:20.970184 containerd[1540]: time="2025-05-15T16:35:20.969977943Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69546987703db3a45feeff4060073fb07d1fdcc3f821e331c2801784aab0e2ad\" id:\"263fec4cebcc5745acd4ec497cc8a30f4244273af24b1e8bd97bdeaff3239326\" pid:5845 exited_at:{seconds:1747326920 nanos:969378775}"