May 13 12:51:51.942538 kernel: Linux version 6.12.28-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 11:28:50 -00 2025
May 13 12:51:51.942565 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1
May 13 12:51:51.942576 kernel: BIOS-provided physical RAM map:
May 13 12:51:51.942589 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 12:51:51.942596 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 12:51:51.942604 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 12:51:51.942613 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
May 13 12:51:51.942620 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
May 13 12:51:51.942628 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 12:51:51.942636 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 12:51:51.942644 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
May 13 12:51:51.942651 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 12:51:51.942661 kernel: NX (Execute Disable) protection: active
May 13 12:51:51.942668 kernel: APIC: Static calls initialized
May 13 12:51:51.942677 kernel: SMBIOS 3.0.0 present.
May 13 12:51:51.942686 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
May 13 12:51:51.942694 kernel: DMI: Memory slots populated: 1/1
May 13 12:51:51.942703 kernel: Hypervisor detected: KVM
May 13 12:51:51.942711 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 12:51:51.942719 kernel: kvm-clock: using sched offset of 4725932401 cycles
May 13 12:51:51.942727 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 12:51:51.942736 kernel: tsc: Detected 1996.249 MHz processor
May 13 12:51:51.942744 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 12:51:51.942753 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 12:51:51.942762 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
May 13 12:51:51.942770 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 13 12:51:51.942780 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 12:51:51.942789 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
May 13 12:51:51.942797 kernel: ACPI: Early table checksum verification disabled
May 13 12:51:51.942805 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
May 13 12:51:51.942813 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:51:51.942822 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:51:51.942830 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:51:51.942838 kernel: ACPI: FACS 0x00000000BFFE0000 000040
May 13 12:51:51.942846 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:51:51.942856 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:51:51.942865 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
May 13 12:51:51.942873 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 13 12:51:51.942881 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
May 13 12:51:51.942889 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
May 13 12:51:51.942901 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
May 13 12:51:51.942909 kernel: No NUMA configuration found
May 13 12:51:51.942919 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
May 13 12:51:51.942928 kernel: NODE_DATA(0) allocated [mem 0x13fff5dc0-0x13fffcfff]
May 13 12:51:51.942937 kernel: Zone ranges:
May 13 12:51:51.942946 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 12:51:51.942954 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 13 12:51:51.942963 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
May 13 12:51:51.942971 kernel: Device empty
May 13 12:51:51.942980 kernel: Movable zone start for each node
May 13 12:51:51.942990 kernel: Early memory node ranges
May 13 12:51:51.942998 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 12:51:51.943007 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
May 13 12:51:51.943015 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
May 13 12:51:51.943024 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
May 13 12:51:51.943033 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 12:51:51.943041 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 12:51:51.943050 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 13 12:51:51.943058 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 12:51:51.943068 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 12:51:51.943077 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 12:51:51.943085 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 12:51:51.943094 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 12:51:51.943102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 12:51:51.943111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 12:51:51.943119 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 12:51:51.943128 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 12:51:51.943136 kernel: CPU topo: Max. logical packages: 2
May 13 12:51:51.943147 kernel: CPU topo: Max. logical dies: 2
May 13 12:51:51.943155 kernel: CPU topo: Max. dies per package: 1
May 13 12:51:51.943163 kernel: CPU topo: Max. threads per core: 1
May 13 12:51:51.943172 kernel: CPU topo: Num. cores per package: 1
May 13 12:51:51.943180 kernel: CPU topo: Num. threads per package: 1
May 13 12:51:51.943189 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
May 13 12:51:51.943197 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 13 12:51:51.943206 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
May 13 12:51:51.943214 kernel: Booting paravirtualized kernel on KVM
May 13 12:51:51.943225 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 12:51:51.943233 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 13 12:51:51.943242 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
May 13 12:51:51.943251 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
May 13 12:51:51.943259 kernel: pcpu-alloc: [0] 0 1
May 13 12:51:51.943268 kernel: kvm-guest: PV spinlocks disabled, no host support
May 13 12:51:51.943278 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1
May 13 12:51:51.943287 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 12:51:51.943297 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 12:51:51.943306 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 12:51:51.943314 kernel: Fallback order for Node 0: 0
May 13 12:51:51.943323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
May 13 12:51:51.943331 kernel: Policy zone: Normal
May 13 12:51:51.943340 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 12:51:51.943348 kernel: software IO TLB: area num 2.
May 13 12:51:51.943357 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 13 12:51:51.943365 kernel: ftrace: allocating 40071 entries in 157 pages
May 13 12:51:51.943375 kernel: ftrace: allocated 157 pages with 5 groups
May 13 12:51:51.943384 kernel: Dynamic Preempt: voluntary
May 13 12:51:51.943392 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 12:51:51.943401 kernel: rcu: RCU event tracing is enabled.
May 13 12:51:51.943410 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 13 12:51:51.943419 kernel: Trampoline variant of Tasks RCU enabled.
May 13 12:51:51.943428 kernel: Rude variant of Tasks RCU enabled.
May 13 12:51:51.943436 kernel: Tracing variant of Tasks RCU enabled.
May 13 12:51:51.943445 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 12:51:51.943453 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 13 12:51:51.943464 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 12:51:51.943473 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 12:51:51.943545 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 13 12:51:51.943554 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 13 12:51:51.943563 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 12:51:51.943572 kernel: Console: colour VGA+ 80x25
May 13 12:51:51.943580 kernel: printk: legacy console [tty0] enabled
May 13 12:51:51.943589 kernel: printk: legacy console [ttyS0] enabled
May 13 12:51:51.943597 kernel: ACPI: Core revision 20240827
May 13 12:51:51.943609 kernel: APIC: Switch to symmetric I/O mode setup
May 13 12:51:51.947246 kernel: x2apic enabled
May 13 12:51:51.947267 kernel: APIC: Switched APIC routing to: physical x2apic
May 13 12:51:51.947276 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 12:51:51.947285 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 12:51:51.947303 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
May 13 12:51:51.947314 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 13 12:51:51.947323 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 13 12:51:51.947332 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 12:51:51.947341 kernel: Spectre V2 : Mitigation: Retpolines
May 13 12:51:51.947351 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 12:51:51.947361 kernel: Speculative Store Bypass: Vulnerable
May 13 12:51:51.947371 kernel: x86/fpu: x87 FPU will use FXSAVE
May 13 12:51:51.947380 kernel: Freeing SMP alternatives memory: 32K
May 13 12:51:51.947389 kernel: pid_max: default: 32768 minimum: 301
May 13 12:51:51.947398 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 13 12:51:51.947409 kernel: landlock: Up and running.
May 13 12:51:51.947418 kernel: SELinux: Initializing.
May 13 12:51:51.947427 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:51:51.947436 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:51:51.947445 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
May 13 12:51:51.947455 kernel: Performance Events: AMD PMU driver.
May 13 12:51:51.947464 kernel: ... version:                0
May 13 12:51:51.947473 kernel: ... bit width:              48
May 13 12:51:51.947500 kernel: ... generic registers:      4
May 13 12:51:51.947511 kernel: ... value mask:             0000ffffffffffff
May 13 12:51:51.947521 kernel: ... max period:             00007fffffffffff
May 13 12:51:51.947529 kernel: ... fixed-purpose events:   0
May 13 12:51:51.947539 kernel: ... event mask:             000000000000000f
May 13 12:51:51.947548 kernel: signal: max sigframe size: 1440
May 13 12:51:51.947556 kernel: rcu: Hierarchical SRCU implementation.
May 13 12:51:51.947566 kernel: rcu: Max phase no-delay instances is 400.
May 13 12:51:51.947575 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 13 12:51:51.947585 kernel: smp: Bringing up secondary CPUs ...
May 13 12:51:51.947594 kernel: smpboot: x86: Booting SMP configuration:
May 13 12:51:51.947604 kernel: .... node #0, CPUs: #1
May 13 12:51:51.947613 kernel: smp: Brought up 1 node, 2 CPUs
May 13 12:51:51.947623 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
May 13 12:51:51.947632 kernel: Memory: 3961272K/4193772K available (14336K kernel code, 2430K rwdata, 9948K rodata, 54420K init, 2548K bss, 227296K reserved, 0K cma-reserved)
May 13 12:51:51.947642 kernel: devtmpfs: initialized
May 13 12:51:51.947651 kernel: x86/mm: Memory block size: 128MB
May 13 12:51:51.947660 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 12:51:51.947669 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 13 12:51:51.947678 kernel: pinctrl core: initialized pinctrl subsystem
May 13 12:51:51.947689 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 12:51:51.947698 kernel: audit: initializing netlink subsys (disabled)
May 13 12:51:51.947707 kernel: audit: type=2000 audit(1747140708.442:1): state=initialized audit_enabled=0 res=1
May 13 12:51:51.947716 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 12:51:51.947725 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 12:51:51.947735 kernel: cpuidle: using governor menu
May 13 12:51:51.947744 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 12:51:51.947753 kernel: dca service started, version 1.12.1
May 13 12:51:51.947762 kernel: PCI: Using configuration type 1 for base access
May 13 12:51:51.947773 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 12:51:51.947782 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 12:51:51.947791 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 13 12:51:51.947800 kernel: ACPI: Added _OSI(Module Device)
May 13 12:51:51.947809 kernel: ACPI: Added _OSI(Processor Device)
May 13 12:51:51.947818 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 12:51:51.947827 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 12:51:51.947837 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 12:51:51.947846 kernel: ACPI: Interpreter enabled
May 13 12:51:51.947857 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 12:51:51.947866 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 12:51:51.947875 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 12:51:51.947884 kernel: PCI: Using E820 reservations for host bridge windows
May 13 12:51:51.947893 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 13 12:51:51.947902 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 12:51:51.948053 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 13 12:51:51.948145 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 13 12:51:51.948236 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 13 12:51:51.948250 kernel: acpiphp: Slot [3] registered
May 13 12:51:51.948260 kernel: acpiphp: Slot [4] registered
May 13 12:51:51.948268 kernel: acpiphp: Slot [5] registered
May 13 12:51:51.948277 kernel: acpiphp: Slot [6] registered
May 13 12:51:51.948286 kernel: acpiphp: Slot [7] registered
May 13 12:51:51.948295 kernel: acpiphp: Slot [8] registered
May 13 12:51:51.948305 kernel: acpiphp: Slot [9] registered
May 13 12:51:51.948316 kernel: acpiphp: Slot [10] registered
May 13 12:51:51.948326 kernel: acpiphp: Slot [11] registered
May 13 12:51:51.948335 kernel: acpiphp: Slot [12] registered
May 13 12:51:51.948344 kernel: acpiphp: Slot [13] registered
May 13 12:51:51.948353 kernel: acpiphp: Slot [14] registered
May 13 12:51:51.948362 kernel: acpiphp: Slot [15] registered
May 13 12:51:51.948371 kernel: acpiphp: Slot [16] registered
May 13 12:51:51.948380 kernel: acpiphp: Slot [17] registered
May 13 12:51:51.948389 kernel: acpiphp: Slot [18] registered
May 13 12:51:51.948399 kernel: acpiphp: Slot [19] registered
May 13 12:51:51.948408 kernel: acpiphp: Slot [20] registered
May 13 12:51:51.948417 kernel: acpiphp: Slot [21] registered
May 13 12:51:51.948426 kernel: acpiphp: Slot [22] registered
May 13 12:51:51.948435 kernel: acpiphp: Slot [23] registered
May 13 12:51:51.948444 kernel: acpiphp: Slot [24] registered
May 13 12:51:51.948453 kernel: acpiphp: Slot [25] registered
May 13 12:51:51.948462 kernel: acpiphp: Slot [26] registered
May 13 12:51:51.948471 kernel: acpiphp: Slot [27] registered
May 13 12:51:51.948497 kernel: acpiphp: Slot [28] registered
May 13 12:51:51.948508 kernel: acpiphp: Slot [29] registered
May 13 12:51:51.948517 kernel: acpiphp: Slot [30] registered
May 13 12:51:51.948526 kernel: acpiphp: Slot [31] registered
May 13 12:51:51.948535 kernel: PCI host bridge to bus 0000:00
May 13 12:51:51.948633 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 12:51:51.948714 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 12:51:51.948791 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 12:51:51.948867 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 12:51:51.948947 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
May 13 12:51:51.949022 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 12:51:51.949123 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
May 13 12:51:51.949223 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
May 13 12:51:51.949322 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
May 13 12:51:51.949412 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f]
May 13 12:51:51.949537 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
May 13 12:51:51.949631 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
May 13 12:51:51.949726 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
May 13 12:51:51.949819 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
May 13 12:51:51.949928 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
May 13 12:51:51.950023 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 13 12:51:51.950121 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 13 12:51:51.950223 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
May 13 12:51:51.950320 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
May 13 12:51:51.950414 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref]
May 13 12:51:51.952562 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
May 13 12:51:51.952665 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
May 13 12:51:51.952756 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 12:51:51.952859 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
May 13 12:51:51.952949 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf]
May 13 12:51:51.953036 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
May 13 12:51:51.953123 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref]
May 13 12:51:51.953211 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
May 13 12:51:51.953305 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
May 13 12:51:51.953393 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
May 13 12:51:51.953516 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
May 13 12:51:51.953608 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref]
May 13 12:51:51.953707 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
May 13 12:51:51.953796 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff]
May 13 12:51:51.953884 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref]
May 13 12:51:51.953978 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 13 12:51:51.954066 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f]
May 13 12:51:51.954158 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
May 13 12:51:51.954247 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref]
May 13 12:51:51.954261 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 12:51:51.954271 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 12:51:51.954280 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 12:51:51.954289 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 12:51:51.954299 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 13 12:51:51.954308 kernel: iommu: Default domain type: Translated
May 13 12:51:51.954320 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 12:51:51.954329 kernel: PCI: Using ACPI for IRQ routing
May 13 12:51:51.954338 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 12:51:51.954348 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 12:51:51.954357 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
May 13 12:51:51.954442 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 13 12:51:51.955793 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 13 12:51:51.955886 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 12:51:51.955900 kernel: vgaarb: loaded
May 13 12:51:51.955913 kernel: clocksource: Switched to clocksource kvm-clock
May 13 12:51:51.955922 kernel: VFS: Disk quotas dquot_6.6.0
May 13 12:51:51.955932 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 12:51:51.955941 kernel: pnp: PnP ACPI init
May 13 12:51:51.956030 kernel: pnp 00:03: [dma 2]
May 13 12:51:51.956045 kernel: pnp: PnP ACPI: found 5 devices
May 13 12:51:51.956055 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 12:51:51.956064 kernel: NET: Registered PF_INET protocol family
May 13 12:51:51.956076 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 12:51:51.956086 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 12:51:51.956095 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 12:51:51.956104 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 12:51:51.956114 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 12:51:51.956123 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 12:51:51.956132 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:51:51.956141 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:51:51.956150 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 12:51:51.956162 kernel: NET: Registered PF_XDP protocol family
May 13 12:51:51.956247 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 12:51:51.956334 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 12:51:51.956409 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 12:51:51.956512 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 13 12:51:51.956592 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
May 13 12:51:51.956682 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 13 12:51:51.956771 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 13 12:51:51.956789 kernel: PCI: CLS 0 bytes, default 64
May 13 12:51:51.956799 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 13 12:51:51.956808 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
May 13 12:51:51.956818 kernel: Initialise system trusted keyrings
May 13 12:51:51.956827 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 12:51:51.956836 kernel: Key type asymmetric registered
May 13 12:51:51.956845 kernel: Asymmetric key parser 'x509' registered
May 13 12:51:51.956855 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 12:51:51.956864 kernel: io scheduler mq-deadline registered
May 13 12:51:51.956875 kernel: io scheduler kyber registered
May 13 12:51:51.956884 kernel: io scheduler bfq registered
May 13 12:51:51.956893 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 12:51:51.956903 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 13 12:51:51.956912 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 13 12:51:51.956922 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 13 12:51:51.956931 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 13 12:51:51.956940 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 12:51:51.956949 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 12:51:51.956960 kernel: random: crng init done
May 13 12:51:51.956969 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 12:51:51.956978 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 12:51:51.956988 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 12:51:51.956997 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 12:51:51.957087 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 12:51:51.957167 kernel: rtc_cmos 00:04: registered as rtc0
May 13 12:51:51.957246 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T12:51:51 UTC (1747140711)
May 13 12:51:51.957328 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 13 12:51:51.957341 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 13 12:51:51.957351 kernel: NET: Registered PF_INET6 protocol family
May 13 12:51:51.957360 kernel: Segment Routing with IPv6
May 13 12:51:51.957369 kernel: In-situ OAM (IOAM) with IPv6
May 13 12:51:51.957378 kernel: NET: Registered PF_PACKET protocol family
May 13 12:51:51.957387 kernel: Key type dns_resolver registered
May 13 12:51:51.957396 kernel: IPI shorthand broadcast: enabled
May 13 12:51:51.957405 kernel: sched_clock: Marking stable (3522007889, 179251574)->(3733397266, -32137803)
May 13 12:51:51.957417 kernel: registered taskstats version 1
May 13 12:51:51.957426 kernel: Loading compiled-in X.509 certificates
May 13 12:51:51.957435 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.28-flatcar: d81efc2839896c91a2830d4cfad7b0572af8b26a'
May 13 12:51:51.957444 kernel: Demotion targets for Node 0: null
May 13 12:51:51.957453 kernel: Key type .fscrypt registered
May 13 12:51:51.957462 kernel: Key type fscrypt-provisioning registered
May 13 12:51:51.957471 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 12:51:51.957923 kernel: ima: Allocated hash algorithm: sha1
May 13 12:51:51.957937 kernel: ima: No architecture policies found
May 13 12:51:51.957947 kernel: clk: Disabling unused clocks
May 13 12:51:51.957957 kernel: Warning: unable to open an initial console.
May 13 12:51:51.957967 kernel: Freeing unused kernel image (initmem) memory: 54420K
May 13 12:51:51.957977 kernel: Write protecting the kernel read-only data: 24576k
May 13 12:51:51.957986 kernel: Freeing unused kernel image (rodata/data gap) memory: 292K
May 13 12:51:51.957996 kernel: Run /init as init process
May 13 12:51:51.958006 kernel: with arguments:
May 13 12:51:51.958016 kernel: /init
May 13 12:51:51.958027 kernel: with environment:
May 13 12:51:51.958037 kernel: HOME=/
May 13 12:51:51.958047 kernel: TERM=linux
May 13 12:51:51.958056 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 12:51:51.958068 systemd[1]: Successfully made /usr/ read-only.
May 13 12:51:51.958082 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 12:51:51.958096 systemd[1]: Detected virtualization kvm.
May 13 12:51:51.958113 systemd[1]: Detected architecture x86-64.
May 13 12:51:51.958125 systemd[1]: Running in initrd.
May 13 12:51:51.958136 systemd[1]: No hostname configured, using default hostname.
May 13 12:51:51.958147 systemd[1]: Hostname set to .
May 13 12:51:51.958157 systemd[1]: Initializing machine ID from VM UUID.
May 13 12:51:51.958168 systemd[1]: Queued start job for default target initrd.target.
May 13 12:51:51.958180 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:51:51.958193 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:51:51.958205 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 12:51:51.958215 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 12:51:51.958227 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 12:51:51.958238 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 12:51:51.958250 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 12:51:51.958263 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 12:51:51.958274 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:51:51.958285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 12:51:51.958296 systemd[1]: Reached target paths.target - Path Units.
May 13 12:51:51.958307 systemd[1]: Reached target slices.target - Slice Units.
May 13 12:51:51.958317 systemd[1]: Reached target swap.target - Swaps.
May 13 12:51:51.958329 systemd[1]: Reached target timers.target - Timer Units.
May 13 12:51:51.958340 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 12:51:51.958350 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 12:51:51.958362 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 12:51:51.958372 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 12:51:51.958382 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:51:51.958392 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 12:51:51.958403 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:51:51.958412 systemd[1]: Reached target sockets.target - Socket Units.
May 13 12:51:51.958423 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 12:51:51.958433 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 12:51:51.958444 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 12:51:51.958455 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 13 12:51:51.958468 systemd[1]: Starting systemd-fsck-usr.service...
May 13 12:51:51.958500 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 12:51:51.958511 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 12:51:51.958523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:51:51.958534 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 12:51:51.958544 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:51:51.958578 systemd-journald[212]: Collecting audit messages is disabled.
May 13 12:51:51.958605 systemd[1]: Finished systemd-fsck-usr.service.
May 13 12:51:51.958616 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 12:51:51.958627 systemd-journald[212]: Journal started
May 13 12:51:51.958652 systemd-journald[212]: Runtime Journal (/run/log/journal/77d0e71ea5fa4bbe81a58a788e30116c) is 8M, max 78.5M, 70.5M free.
May 13 12:51:51.960499 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 12:51:51.964674 systemd-modules-load[213]: Inserted module 'overlay'
May 13 12:51:51.966680 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 12:51:51.982720 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 12:51:52.030776 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 12:51:52.030805 kernel: Bridge firewalling registered
May 13 12:51:51.992819 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 13 12:51:51.998530 systemd-modules-load[213]: Inserted module 'br_netfilter'
May 13 12:51:52.031274 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 12:51:52.032665 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:51:52.033603 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:51:52.036588 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 12:51:52.041687 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 12:51:52.046080 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 12:51:52.053435 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 12:51:52.056597 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 12:51:52.069060 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:51:52.074517 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 12:51:52.078576 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 12:51:52.101993 systemd-resolved[241]: Positive Trust Anchors:
May 13 12:51:52.102004 systemd-resolved[241]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 12:51:52.102045 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 12:51:52.104971 systemd-resolved[241]: Defaulting to hostname 'linux'.
May 13 12:51:52.105845 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 12:51:52.108732 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 12:51:52.128434 dracut-cmdline[253]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7099d7ee582d4f3e6d25a3763207cfa25fb4eb117c83034e2c517b959b8370a1
May 13 12:51:52.242513 kernel: SCSI subsystem initialized
May 13 12:51:52.253595 kernel: Loading iSCSI transport class v2.0-870.
May 13 12:51:52.265950 kernel: iscsi: registered transport (tcp)
May 13 12:51:52.288664 kernel: iscsi: registered transport (qla4xxx)
May 13 12:51:52.288725 kernel: QLogic iSCSI HBA Driver
May 13 12:51:52.313460 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 12:51:52.327948 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:51:52.334248 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 12:51:52.397293 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 12:51:52.399353 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 12:51:52.482546 kernel: raid6: sse2x4 gen() 8550 MB/s
May 13 12:51:52.500575 kernel: raid6: sse2x2 gen() 14677 MB/s
May 13 12:51:52.518887 kernel: raid6: sse2x1 gen() 9839 MB/s
May 13 12:51:52.518941 kernel: raid6: using algorithm sse2x2 gen() 14677 MB/s
May 13 12:51:52.537876 kernel: raid6: .... xor() 9451 MB/s, rmw enabled
May 13 12:51:52.537928 kernel: raid6: using ssse3x2 recovery algorithm
May 13 12:51:52.559518 kernel: xor: measuring software checksum speed
May 13 12:51:52.559567 kernel: prefetch64-sse : 16622 MB/sec
May 13 12:51:52.562038 kernel: generic_sse : 15685 MB/sec
May 13 12:51:52.562074 kernel: xor: using function: prefetch64-sse (16622 MB/sec)
May 13 12:51:52.760558 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 12:51:52.770206 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 12:51:52.775247 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:51:52.804671 systemd-udevd[461]: Using default interface naming scheme 'v255'.
May 13 12:51:52.810626 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:51:52.817454 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 12:51:52.851268 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
May 13 12:51:52.887829 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 12:51:52.893048 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 12:51:52.949946 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:51:52.956596 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 12:51:53.034525 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 13 12:51:53.051505 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
May 13 12:51:53.070516 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 13 12:51:53.075497 kernel: libata version 3.00 loaded.
May 13 12:51:53.076191 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:51:53.076340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:51:53.077754 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:51:53.080688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:51:53.082697 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 12:51:53.095687 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 12:51:53.095744 kernel: GPT:17805311 != 20971519
May 13 12:51:53.095757 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 12:51:53.095769 kernel: ata_piix 0000:00:01.1: version 2.13
May 13 12:51:53.095929 kernel: GPT:17805311 != 20971519
May 13 12:51:53.095942 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 12:51:53.095953 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:51:53.099712 kernel: scsi host0: ata_piix
May 13 12:51:53.101695 kernel: scsi host1: ata_piix
May 13 12:51:53.104971 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0
May 13 12:51:53.105002 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0
May 13 12:51:53.172165 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 12:51:53.187888 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 12:51:53.188702 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:51:53.198096 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 12:51:53.198709 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 12:51:53.210285 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 12:51:53.212590 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 12:51:53.264572 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:51:53.264897 disk-uuid[556]: Primary Header is updated.
May 13 12:51:53.264897 disk-uuid[556]: Secondary Entries is updated.
May 13 12:51:53.264897 disk-uuid[556]: Secondary Header is updated.
May 13 12:51:53.416899 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 12:51:53.422056 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 12:51:53.422660 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:51:53.423877 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 12:51:53.425885 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 12:51:53.451078 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 12:51:54.300537 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:51:54.301243 disk-uuid[557]: The operation has completed successfully.
May 13 12:51:54.383821 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 12:51:54.384595 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 12:51:54.432506 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 12:51:54.461014 sh[581]: Success
May 13 12:51:54.499133 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 12:51:54.499252 kernel: device-mapper: uevent: version 1.0.3
May 13 12:51:54.501696 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 13 12:51:54.524532 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3"
May 13 12:51:54.602625 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 12:51:54.611627 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 12:51:54.617954 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 12:51:54.657585 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 13 12:51:54.657684 kernel: BTRFS: device fsid 3042589c-b63f-42f0-9a6f-a4369b1889f9 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (593)
May 13 12:51:54.673264 kernel: BTRFS info (device dm-0): first mount of filesystem 3042589c-b63f-42f0-9a6f-a4369b1889f9
May 13 12:51:54.673328 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 13 12:51:54.679841 kernel: BTRFS info (device dm-0): using free-space-tree
May 13 12:51:54.701762 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 12:51:54.703885 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 13 12:51:54.705914 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 12:51:54.708712 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 12:51:54.714716 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 12:51:54.758866 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (628)
May 13 12:51:54.766329 kernel: BTRFS info (device vda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1
May 13 12:51:54.766354 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 12:51:54.766368 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:51:54.779645 kernel: BTRFS info (device vda6): last unmount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1
May 13 12:51:54.779980 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 12:51:54.782766 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 12:51:54.849799 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 12:51:54.853628 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 12:51:54.896920 systemd-networkd[764]: lo: Link UP
May 13 12:51:54.896928 systemd-networkd[764]: lo: Gained carrier
May 13 12:51:54.899286 systemd-networkd[764]: Enumeration completed
May 13 12:51:54.900127 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:51:54.900131 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 12:51:54.901710 systemd-networkd[764]: eth0: Link UP
May 13 12:51:54.901714 systemd-networkd[764]: eth0: Gained carrier
May 13 12:51:54.901723 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:51:54.912095 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 12:51:54.912792 systemd[1]: Reached target network.target - Network.
May 13 12:51:54.913540 systemd-networkd[764]: eth0: DHCPv4 address 172.24.4.211/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 13 12:51:54.990909 ignition[676]: Ignition 2.21.0
May 13 12:51:54.990923 ignition[676]: Stage: fetch-offline
May 13 12:51:54.990967 ignition[676]: no configs at "/usr/lib/ignition/base.d"
May 13 12:51:54.990982 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 12:51:54.993591 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 12:51:54.991096 ignition[676]: parsed url from cmdline: ""
May 13 12:51:54.995471 systemd-resolved[241]: Detected conflict on linux IN A 172.24.4.211
May 13 12:51:54.991100 ignition[676]: no config URL provided
May 13 12:51:54.995528 systemd-resolved[241]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
May 13 12:51:54.991106 ignition[676]: reading system config file "/usr/lib/ignition/user.ign"
May 13 12:51:54.995809 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 13 12:51:54.991114 ignition[676]: no config at "/usr/lib/ignition/user.ign"
May 13 12:51:54.991124 ignition[676]: failed to fetch config: resource requires networking
May 13 12:51:54.992187 ignition[676]: Ignition finished successfully
May 13 12:51:55.027226 ignition[774]: Ignition 2.21.0
May 13 12:51:55.027252 ignition[774]: Stage: fetch
May 13 12:51:55.029325 ignition[774]: no configs at "/usr/lib/ignition/base.d"
May 13 12:51:55.029352 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 12:51:55.029584 ignition[774]: parsed url from cmdline: ""
May 13 12:51:55.029594 ignition[774]: no config URL provided
May 13 12:51:55.029607 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
May 13 12:51:55.029627 ignition[774]: no config at "/usr/lib/ignition/user.ign"
May 13 12:51:55.030465 ignition[774]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 13 12:51:55.031174 ignition[774]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 13 12:51:55.031201 ignition[774]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 13 12:51:55.436663 ignition[774]: GET result: OK
May 13 12:51:55.436853 ignition[774]: parsing config with SHA512: e82e89012dc18e68e0f41f2a12080ac6293e1d1efda2c9e97412642d6fc61785477bc8b0fa464cec2b7d1b81b7913a8e238e41f250b8fe58bccbf531448015e4
May 13 12:51:55.446604 unknown[774]: fetched base config from "system"
May 13 12:51:55.446630 unknown[774]: fetched base config from "system"
May 13 12:51:55.447364 ignition[774]: fetch: fetch complete
May 13 12:51:55.446644 unknown[774]: fetched user config from "openstack"
May 13 12:51:55.447376 ignition[774]: fetch: fetch passed
May 13 12:51:55.452796 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 13 12:51:55.447458 ignition[774]: Ignition finished successfully
May 13 12:51:55.457745 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 12:51:55.516870 ignition[781]: Ignition 2.21.0
May 13 12:51:55.516905 ignition[781]: Stage: kargs
May 13 12:51:55.517206 ignition[781]: no configs at "/usr/lib/ignition/base.d"
May 13 12:51:55.517232 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 12:51:55.519103 ignition[781]: kargs: kargs passed
May 13 12:51:55.522227 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 12:51:55.519196 ignition[781]: Ignition finished successfully
May 13 12:51:55.527141 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 12:51:55.573766 ignition[788]: Ignition 2.21.0
May 13 12:51:55.573794 ignition[788]: Stage: disks
May 13 12:51:55.574035 ignition[788]: no configs at "/usr/lib/ignition/base.d"
May 13 12:51:55.577771 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 12:51:55.574055 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 12:51:55.575283 ignition[788]: disks: disks passed
May 13 12:51:55.580852 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 12:51:55.575322 ignition[788]: Ignition finished successfully
May 13 12:51:55.582316 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 12:51:55.584333 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 12:51:55.586218 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 12:51:55.587930 systemd[1]: Reached target basic.target - Basic System.
May 13 12:51:55.591770 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 12:51:55.628086 systemd-fsck[796]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
May 13 12:51:55.643653 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 12:51:55.648197 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 12:51:55.842503 kernel: EXT4-fs (vda9): mounted filesystem ebf7ca75-051f-4154-b098-5ec24084105d r/w with ordered data mode. Quota mode: none.
May 13 12:51:55.843992 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 12:51:55.846411 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 12:51:55.860159 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 12:51:55.864400 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 12:51:55.868309 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 12:51:55.873732 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
May 13 12:51:55.879873 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 12:51:55.879960 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 12:51:55.902708 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (804)
May 13 12:51:55.902757 kernel: BTRFS info (device vda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1
May 13 12:51:55.902789 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 12:51:55.902819 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:51:55.893719 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 12:51:55.908624 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 12:51:55.927811 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 12:51:56.066552 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
May 13 12:51:56.066949 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
May 13 12:51:56.076002 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
May 13 12:51:56.081122 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
May 13 12:51:56.085820 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 12:51:56.177737 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 12:51:56.179608 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 12:51:56.180704 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 12:51:56.197890 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 12:51:56.201508 kernel: BTRFS info (device vda6): last unmount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1
May 13 12:51:56.218567 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 12:51:56.228082 ignition[923]: INFO : Ignition 2.21.0
May 13 12:51:56.230009 ignition[923]: INFO : Stage: mount
May 13 12:51:56.230009 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:51:56.230009 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 12:51:56.230009 ignition[923]: INFO : mount: mount passed
May 13 12:51:56.230009 ignition[923]: INFO : Ignition finished successfully
May 13 12:51:56.231581 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 12:51:56.668880 systemd-networkd[764]: eth0: Gained IPv6LL
May 13 12:51:57.107528 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
May 13 12:51:59.119567 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
May 13 12:52:03.130537 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
May 13 12:52:03.138931 coreos-metadata[806]: May 13 12:52:03.138 WARN failed to locate config-drive, using the metadata service API instead
May 13 12:52:03.180075 coreos-metadata[806]: May 13 12:52:03.179 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 13 12:52:03.195273 coreos-metadata[806]: May 13 12:52:03.195 INFO Fetch successful
May 13 12:52:03.197627 coreos-metadata[806]: May 13 12:52:03.196 INFO wrote hostname ci-9999-9-100-4cb33ef211.novalocal to /sysroot/etc/hostname
May 13 12:52:03.201250 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 13 12:52:03.201560 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
May 13 12:52:03.209313 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 12:52:03.239121 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 12:52:03.275579 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (939)
May 13 12:52:03.283516 kernel: BTRFS info (device vda6): first mount of filesystem 00c8da9a-330c-44ff-bf12-f9831c2c14e1
May 13 12:52:03.283581 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 12:52:03.287794 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:52:03.302151 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 12:52:03.355532 ignition[957]: INFO : Ignition 2.21.0
May 13 12:52:03.355532 ignition[957]: INFO : Stage: files
May 13 12:52:03.355532 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:52:03.355532 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 12:52:03.362764 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
May 13 12:52:03.366609 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 12:52:03.366609 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 12:52:03.370779 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 12:52:03.370779 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 12:52:03.374932 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 12:52:03.374932 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 13 12:52:03.374932 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 13 12:52:03.372133 unknown[957]: wrote ssh authorized keys file for user: core
May 13 12:52:03.458524 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 12:52:03.813344 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 13 12:52:03.815101 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 12:52:03.815101 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 12:52:03.815101 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 12:52:03.815101 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 12:52:03.815101 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 12:52:03.815101 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 12:52:03.815101 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 12:52:03.815101 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 12:52:03.832355 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 12:52:03.832355 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 12:52:03.832355 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 12:52:03.832355 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 12:52:03.832355 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 12:52:03.832355 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 13 12:52:04.599266 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 13 12:52:06.956625 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 13 12:52:06.956625 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 13 12:52:06.961824 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 12:52:06.967300 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 12:52:06.967300 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 13 12:52:06.967300 ignition[957]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 13 12:52:06.975305 ignition[957]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 13 12:52:06.975305 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 12:52:06.975305 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 12:52:06.975305 ignition[957]: INFO : files: files passed
May 13 12:52:06.975305 ignition[957]: INFO : Ignition finished successfully
May 13 12:52:06.969854 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 12:52:06.977440 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 12:52:06.986723 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 12:52:07.001899 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 12:52:07.002664 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 12:52:07.012768 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:52:07.013941 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:52:07.016229 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:52:07.019154 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 12:52:07.021008 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 12:52:07.022386 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 12:52:07.103099 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 12:52:07.103338 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 12:52:07.106897 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 12:52:07.109231 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 12:52:07.113707 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 12:52:07.117769 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 12:52:07.160545 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 12:52:07.165153 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 12:52:07.208302 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 12:52:07.210155 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:52:07.213400 systemd[1]: Stopped target timers.target - Timer Units.
May 13 12:52:07.216508 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 12:52:07.216909 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 12:52:07.219875 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 12:52:07.221825 systemd[1]: Stopped target basic.target - Basic System.
May 13 12:52:07.224642 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 12:52:07.227320 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 12:52:07.229902 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 12:52:07.232956 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 13 12:52:07.236109 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 12:52:07.239111 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 12:52:07.242203 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 12:52:07.245170 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 12:52:07.248220 systemd[1]: Stopped target swap.target - Swaps.
May 13 12:52:07.250911 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 12:52:07.251187 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 12:52:07.254516 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 12:52:07.256456 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:52:07.258889 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 12:52:07.259737 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:52:07.261973 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 12:52:07.262348 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 12:52:07.266200 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 12:52:07.266693 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 12:52:07.269734 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 12:52:07.270004 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 12:52:07.279612 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 12:52:07.281429 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 12:52:07.281791 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:52:07.289872 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 12:52:07.293364 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 12:52:07.293807 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:52:07.297591 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 12:52:07.297875 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 12:52:07.318284 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 12:52:07.318383 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 12:52:07.326527 ignition[1010]: INFO : Ignition 2.21.0
May 13 12:52:07.326527 ignition[1010]: INFO : Stage: umount
May 13 12:52:07.326527 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:52:07.326527 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 13 12:52:07.330449 ignition[1010]: INFO : umount: umount passed
May 13 12:52:07.330449 ignition[1010]: INFO : Ignition finished successfully
May 13 12:52:07.333177 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 12:52:07.333987 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 12:52:07.336174 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 12:52:07.336727 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 12:52:07.336772 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 12:52:07.337858 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 12:52:07.337902 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 12:52:07.338410 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 13 12:52:07.338452 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 13 12:52:07.338982 systemd[1]: Stopped target network.target - Network.
May 13 12:52:07.339450 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 12:52:07.339567 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 12:52:07.340765 systemd[1]: Stopped target paths.target - Path Units.
May 13 12:52:07.341738 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 12:52:07.345537 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:52:07.346233 systemd[1]: Stopped target slices.target - Slice Units.
May 13 12:52:07.347238 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 12:52:07.348426 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 12:52:07.348460 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 12:52:07.349686 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 12:52:07.349717 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 12:52:07.350682 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 12:52:07.350733 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 12:52:07.351653 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 12:52:07.351692 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 12:52:07.352698 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 12:52:07.353710 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 12:52:07.358876 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 12:52:07.358970 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 12:52:07.360099 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 12:52:07.360172 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 12:52:07.363452 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 12:52:07.363613 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 12:52:07.367616 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 12:52:07.367841 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 12:52:07.367942 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 12:52:07.370574 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 12:52:07.371195 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 13 12:52:07.372117 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 12:52:07.372160 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:52:07.373987 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 12:52:07.375817 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 12:52:07.375897 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 12:52:07.376456 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 12:52:07.376515 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 12:52:07.378645 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 12:52:07.378705 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 12:52:07.379548 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 12:52:07.379593 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:52:07.381034 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:52:07.383083 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 12:52:07.383147 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 12:52:07.390163 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 12:52:07.390342 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:52:07.391245 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 12:52:07.391286 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 12:52:07.392079 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 12:52:07.392111 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:52:07.393291 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 12:52:07.393338 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 12:52:07.395077 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 12:52:07.395119 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 12:52:07.395880 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 12:52:07.395920 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 12:52:07.397726 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 12:52:07.398652 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 13 12:52:07.398704 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:52:07.401740 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 12:52:07.401795 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:52:07.404361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:52:07.404409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:52:07.408565 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 13 12:52:07.408620 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 13 12:52:07.408661 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 13 12:52:07.409018 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 12:52:07.409599 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 12:52:07.413126 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 12:52:07.413227 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 12:52:07.414307 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 12:52:07.416093 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 12:52:07.444158 systemd[1]: Switching root.
May 13 12:52:07.480309 systemd-journald[212]: Journal stopped
May 13 12:52:09.210786 systemd-journald[212]: Received SIGTERM from PID 1 (systemd).
May 13 12:52:09.210846 kernel: SELinux: policy capability network_peer_controls=1
May 13 12:52:09.210866 kernel: SELinux: policy capability open_perms=1
May 13 12:52:09.210878 kernel: SELinux: policy capability extended_socket_class=1
May 13 12:52:09.210889 kernel: SELinux: policy capability always_check_network=0
May 13 12:52:09.210900 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 12:52:09.210911 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 12:52:09.210922 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 12:52:09.210936 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 12:52:09.210949 kernel: SELinux: policy capability userspace_initial_context=0
May 13 12:52:09.210961 kernel: audit: type=1403 audit(1747140727.928:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 12:52:09.210974 systemd[1]: Successfully loaded SELinux policy in 84.699ms.
May 13 12:52:09.210997 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 26.155ms.
May 13 12:52:09.211010 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 12:52:09.211025 systemd[1]: Detected virtualization kvm.
May 13 12:52:09.211037 systemd[1]: Detected architecture x86-64.
May 13 12:52:09.211049 systemd[1]: Detected first boot.
May 13 12:52:09.211064 systemd[1]: Hostname set to .
May 13 12:52:09.211076 systemd[1]: Initializing machine ID from VM UUID.
May 13 12:52:09.211088 zram_generator::config[1055]: No configuration found.
May 13 12:52:09.211101 kernel: Guest personality initialized and is inactive
May 13 12:52:09.211111 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 13 12:52:09.211123 kernel: Initialized host personality
May 13 12:52:09.211133 kernel: NET: Registered PF_VSOCK protocol family
May 13 12:52:09.211145 systemd[1]: Populated /etc with preset unit settings.
May 13 12:52:09.211159 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 12:52:09.211171 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 12:52:09.211183 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 12:52:09.211198 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 12:52:09.211213 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 12:52:09.211225 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 12:52:09.211237 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 12:52:09.211249 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 12:52:09.211263 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 12:52:09.211277 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 12:52:09.211291 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 12:52:09.211303 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 12:52:09.211314 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:52:09.211327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:52:09.211338 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 12:52:09.211351 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 12:52:09.211365 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 12:52:09.211377 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 12:52:09.211389 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 13 12:52:09.211401 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:52:09.211413 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 12:52:09.211425 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 12:52:09.211437 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 12:52:09.211449 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 12:52:09.211463 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 12:52:09.211746 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:52:09.211769 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 12:52:09.211781 systemd[1]: Reached target slices.target - Slice Units.
May 13 12:52:09.211795 systemd[1]: Reached target swap.target - Swaps.
May 13 12:52:09.211807 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 12:52:09.211819 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 12:52:09.211832 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 12:52:09.211844 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:52:09.211860 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 12:52:09.211872 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:52:09.211884 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 12:52:09.211896 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 12:52:09.211908 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 12:52:09.211920 systemd[1]: Mounting media.mount - External Media Directory...
May 13 12:52:09.211933 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:52:09.211945 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 12:52:09.211956 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 12:52:09.211970 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 12:52:09.211983 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 12:52:09.211995 systemd[1]: Reached target machines.target - Containers.
May 13 12:52:09.212006 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 12:52:09.212018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:52:09.212031 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 12:52:09.212043 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 12:52:09.212055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:52:09.212070 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 12:52:09.212083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:52:09.212095 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 12:52:09.212107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:52:09.212120 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 12:52:09.212132 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 12:52:09.212144 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 12:52:09.212156 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 12:52:09.212167 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 12:52:09.212182 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:52:09.212194 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 12:52:09.212206 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 12:52:09.212218 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 12:52:09.212229 kernel: loop: module loaded
May 13 12:52:09.212241 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 12:52:09.212253 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 12:52:09.212268 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 12:52:09.212280 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 12:52:09.212292 systemd[1]: Stopped verity-setup.service.
May 13 12:52:09.212307 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:52:09.212320 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 12:52:09.212332 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 12:52:09.212344 systemd[1]: Mounted media.mount - External Media Directory.
May 13 12:52:09.212356 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 12:52:09.212368 kernel: ACPI: bus type drm_connector registered
May 13 12:52:09.212379 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 12:52:09.212519 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 12:52:09.212543 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:52:09.212555 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 12:52:09.212567 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 12:52:09.212579 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:52:09.212592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:52:09.212604 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 12:52:09.212616 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 12:52:09.212629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:52:09.212641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:52:09.212654 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:52:09.212666 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:52:09.212679 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 12:52:09.212691 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:52:09.212703 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 12:52:09.212717 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 12:52:09.212728 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 12:52:09.212743 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 12:52:09.212756 kernel: fuse: init (API version 7.41)
May 13 12:52:09.212767 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 12:52:09.212781 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 12:52:09.212794 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 12:52:09.212806 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:52:09.212837 systemd-journald[1145]: Collecting audit messages is disabled.
May 13 12:52:09.213087 systemd-journald[1145]: Journal started
May 13 12:52:09.213117 systemd-journald[1145]: Runtime Journal (/run/log/journal/77d0e71ea5fa4bbe81a58a788e30116c) is 8M, max 78.5M, 70.5M free.
May 13 12:52:08.777801 systemd[1]: Queued start job for default target multi-user.target.
May 13 12:52:08.797776 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 12:52:08.798227 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 12:52:09.216514 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 12:52:09.222023 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 12:52:09.222057 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 12:52:09.230514 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 12:52:09.234567 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 12:52:09.243525 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 12:52:09.247666 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 12:52:09.249544 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 12:52:09.250253 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 12:52:09.250429 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 12:52:09.254755 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 12:52:09.255742 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:52:09.257908 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 12:52:09.258727 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 12:52:09.282378 kernel: loop0: detected capacity change from 0 to 218376
May 13 12:52:09.276805 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 12:52:09.280797 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 12:52:09.284649 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 12:52:09.287592 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 12:52:09.293538 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 12:52:09.316151 systemd-journald[1145]: Time spent on flushing to /var/log/journal/77d0e71ea5fa4bbe81a58a788e30116c is 26.751ms for 978 entries.
May 13 12:52:09.316151 systemd-journald[1145]: System Journal (/var/log/journal/77d0e71ea5fa4bbe81a58a788e30116c) is 8M, max 584.8M, 576.8M free.
May 13 12:52:09.391912 systemd-journald[1145]: Received client request to flush runtime journal.
May 13 12:52:09.391972 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 12:52:09.359860 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 12:52:09.393688 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 12:52:09.417542 kernel: loop1: detected capacity change from 0 to 146240
May 13 12:52:09.428178 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 12:52:09.433419 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 12:52:09.481802 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
May 13 12:52:09.481820 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
May 13 12:52:09.486901 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:52:09.491528 kernel: loop2: detected capacity change from 0 to 113872
May 13 12:52:09.541511 kernel: loop3: detected capacity change from 0 to 8
May 13 12:52:09.567502 kernel: loop4: detected capacity change from 0 to 218376
May 13 12:52:09.638523 kernel: loop5: detected capacity change from 0 to 146240
May 13 12:52:09.696505 kernel: loop6: detected capacity change from 0 to 113872
May 13 12:52:09.747516 kernel: loop7: detected capacity change from 0 to 8
May 13 12:52:09.748856 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 13 12:52:09.750965 (sd-merge)[1215]: Merged extensions into '/usr'.
May 13 12:52:09.763777 systemd[1]: Reload requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 12:52:09.763806 systemd[1]: Reloading...
May 13 12:52:09.868505 zram_generator::config[1237]: No configuration found.
May 13 12:52:10.016684 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:52:10.130645 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 12:52:10.130934 systemd[1]: Reloading finished in 365 ms.
May 13 12:52:10.154267 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 12:52:10.155208 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 12:52:10.160143 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 12:52:10.167677 systemd[1]: Starting ensure-sysext.service...
May 13 12:52:10.170048 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 12:52:10.174349 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:52:10.195089 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 12:52:10.203871 systemd[1]: Reload requested from client PID 1298 ('systemctl') (unit ensure-sysext.service)...
May 13 12:52:10.203887 systemd[1]: Reloading...
May 13 12:52:10.218178 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 13 12:52:10.219525 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 13 12:52:10.219839 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 12:52:10.220162 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 12:52:10.221251 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
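[Editor's note] The `docker.socket:6` warning above is systemd flagging a legacy `/var/run/` path; it rewrites the path at load time but asks for the unit file to be updated. A minimal sketch of a drop-in that would silence it, assuming the stock vendor unit layout (the drop-in path and filename here are conventional systemd practice, not taken from this log):

```ini
# /etc/systemd/system/docker.socket.d/10-runtime-dir.conf
# Hypothetical override: point the socket at /run directly,
# matching the rewrite systemd reports in the log above.
[Socket]
# An empty assignment first clears the inherited ListenStream= value.
ListenStream=
ListenStream=/run/docker.sock
```

After placing the drop-in, `systemctl daemon-reload` would pick it up; the vendor unit under /usr/lib stays untouched.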
May 13 12:52:10.223066 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
May 13 12:52:10.223310 systemd-tmpfiles[1299]: ACLs are not supported, ignoring.
May 13 12:52:10.232561 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot.
May 13 12:52:10.232572 systemd-tmpfiles[1299]: Skipping /boot
May 13 12:52:10.248678 systemd-udevd[1300]: Using default interface naming scheme 'v255'.
May 13 12:52:10.253913 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot.
May 13 12:52:10.253924 systemd-tmpfiles[1299]: Skipping /boot
May 13 12:52:10.275120 ldconfig[1169]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 12:52:10.302515 zram_generator::config[1326]: No configuration found.
May 13 12:52:10.504820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:52:10.515556 kernel: mousedev: PS/2 mouse device common for all mice
May 13 12:52:10.532519 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 13 12:52:10.543527 kernel: ACPI: button: Power Button [PWRF]
May 13 12:52:10.564516 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 13 12:52:10.579514 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 13 12:52:10.679096 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 13 12:52:10.679374 systemd[1]: Reloading finished in 475 ms.
May 13 12:52:10.688310 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:52:10.689266 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 12:52:10.698185 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:52:10.734555 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 12:52:10.745328 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:52:10.747632 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 12:52:10.751745 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 12:52:10.753689 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:52:10.758378 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:52:10.765870 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:52:10.774032 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:52:10.776761 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:52:10.777973 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 12:52:10.779530 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:52:10.782219 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 12:52:10.787565 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 12:52:10.793866 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 12:52:10.800463 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 12:52:10.801377 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:52:10.813567 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:52:10.813799 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:52:10.830468 systemd[1]: Finished ensure-sysext.service.
May 13 12:52:10.837530 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 13 12:52:10.838038 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:52:10.838296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:52:10.839557 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 12:52:10.845127 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 13 12:52:10.840267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:52:10.840383 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:52:10.845797 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 12:52:10.849745 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 12:52:10.850332 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 12:52:10.854893 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 12:52:10.859585 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:52:10.859768 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:52:10.860583 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:52:10.860737 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:52:10.861339 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 12:52:10.861398 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 12:52:10.884924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:52:10.888637 kernel: Console: switching to colour dummy device 80x25
May 13 12:52:10.888670 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 12:52:10.889127 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 12:52:10.889327 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 12:52:10.892205 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 13 12:52:10.892254 kernel: [drm] features: -context_init
May 13 12:52:10.896779 kernel: [drm] number of scanouts: 1
May 13 12:52:10.896834 kernel: [drm] number of cap sets: 0
May 13 12:52:10.899587 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
May 13 12:52:10.907774 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 12:52:10.910058 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:52:10.910667 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:52:10.912188 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 12:52:10.914299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:52:10.952113 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 12:52:10.957196 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 12:52:10.958580 augenrules[1485]: No rules
May 13 12:52:10.957445 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 12:52:10.958945 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 12:52:10.959150 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 12:52:10.984074 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 12:52:11.065373 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:52:11.077380 systemd-networkd[1444]: lo: Link UP
May 13 12:52:11.077724 systemd-networkd[1444]: lo: Gained carrier
May 13 12:52:11.079021 systemd-networkd[1444]: Enumeration completed
May 13 12:52:11.079186 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 12:52:11.079561 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:52:11.079569 systemd-networkd[1444]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 12:52:11.080218 systemd-networkd[1444]: eth0: Link UP
May 13 12:52:11.080413 systemd-networkd[1444]: eth0: Gained carrier
May 13 12:52:11.080510 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:52:11.082741 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 12:52:11.086658 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 12:52:11.092071 systemd-resolved[1446]: Positive Trust Anchors:
May 13 12:52:11.092554 systemd-networkd[1444]: eth0: DHCPv4 address 172.24.4.211/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 13 12:52:11.093547 systemd-resolved[1446]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 12:52:11.093661 systemd-resolved[1446]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 12:52:11.104329 systemd-resolved[1446]: Using system hostname 'ci-9999-9-100-4cb33ef211.novalocal'.
May 13 12:52:11.107960 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 12:52:11.108229 systemd[1]: Reached target network.target - Network.
May 13 12:52:11.109633 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 12:52:11.109766 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 12:52:11.109908 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 12:52:11.110051 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 12:52:11.110136 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 12:52:11.110207 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
May 13 12:52:11.110269 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 12:52:11.110321 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 12:52:11.110344 systemd[1]: Reached target paths.target - Path Units.
May 13 12:52:11.110394 systemd[1]: Reached target time-set.target - System Time Set.
May 13 12:52:11.110685 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 12:52:11.110904 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 12:52:11.110992 systemd[1]: Reached target timers.target - Timer Units.
May 13 12:52:11.112880 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 12:52:11.114830 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 12:52:11.117744 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 13 12:52:11.117980 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 13 12:52:11.118068 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 13 12:52:11.120208 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 12:52:11.121106 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 13 12:52:11.122007 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 13 12:52:11.122219 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 12:52:11.123527 systemd[1]: Reached target sockets.target - Socket Units.
May 13 12:52:11.123621 systemd[1]: Reached target basic.target - Basic System.
May 13 12:52:11.123775 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 12:52:11.123806 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 12:52:11.124919 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 12:52:11.127614 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 13 12:52:11.129326 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 12:52:11.131643 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 12:52:11.136648 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 12:52:11.138455 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 12:52:11.138574 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 12:52:11.141756 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
May 13 12:52:11.143643 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 12:52:11.145660 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 12:52:11.147912 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 12:52:11.152511 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
May 13 12:52:11.155531 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 12:52:11.162677 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Refreshing passwd entry cache
May 13 12:52:11.163304 oslogin_cache_refresh[1517]: Refreshing passwd entry cache
May 13 12:52:11.163731 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 12:52:11.164989 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 12:52:11.170190 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 12:52:11.176145 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Failure getting users, quitting
May 13 12:52:11.176145 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 13 12:52:11.176145 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Refreshing group entry cache
May 13 12:52:11.175706 systemd[1]: Starting update-engine.service - Update Engine...
May 13 12:52:11.175684 oslogin_cache_refresh[1517]: Failure getting users, quitting
May 13 12:52:11.175700 oslogin_cache_refresh[1517]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
May 13 12:52:11.175741 oslogin_cache_refresh[1517]: Refreshing group entry cache
May 13 12:52:11.181922 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Failure getting groups, quitting
May 13 12:52:11.181972 oslogin_cache_refresh[1517]: Failure getting groups, quitting
May 13 12:52:11.182025 google_oslogin_nss_cache[1517]: oslogin_cache_refresh[1517]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 13 12:52:11.182074 oslogin_cache_refresh[1517]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
May 13 12:52:11.182611 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 12:52:11.184569 extend-filesystems[1516]: Found loop4
May 13 12:52:11.184569 extend-filesystems[1516]: Found loop5
May 13 12:52:11.184569 extend-filesystems[1516]: Found loop6
May 13 12:52:11.184569 extend-filesystems[1516]: Found loop7
May 13 12:52:11.184569 extend-filesystems[1516]: Found vda
May 13 12:52:11.184569 extend-filesystems[1516]: Found vda1
May 13 12:52:11.184569 extend-filesystems[1516]: Found vda2
May 13 12:52:11.184569 extend-filesystems[1516]: Found vda3
May 13 12:52:11.184569 extend-filesystems[1516]: Found usr
May 13 12:52:11.184569 extend-filesystems[1516]: Found vda4
May 13 12:52:11.184569 extend-filesystems[1516]: Found vda6
May 13 12:52:11.184569 extend-filesystems[1516]: Found vda7
May 13 12:52:11.184569 extend-filesystems[1516]: Found vda9
May 13 12:52:11.184569 extend-filesystems[1516]: Checking size of /dev/vda9
May 13 12:52:11.192530 jq[1515]: false
May 13 12:52:11.194712 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 12:52:11.195013 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 12:52:11.195554 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 12:52:11.195804 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
May 13 12:52:11.196634 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
May 13 12:52:11.197762 extend-filesystems[1516]: Resized partition /dev/vda9
May 13 12:52:11.205570 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 12:52:11.208455 extend-filesystems[1538]: resize2fs 1.47.2 (1-Jan-2025)
May 13 12:52:11.210567 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 12:52:11.236633 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 13 12:52:11.239114 jq[1527]: true
May 13 12:52:11.245050 update_engine[1526]: I20250513 12:52:11.243032 1526 main.cc:92] Flatcar Update Engine starting
May 13 12:52:11.254698 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 13 12:52:11.274805 (ntainerd)[1551]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 12:52:11.294628 jq[1553]: true
May 13 12:52:11.275366 systemd[1]: motdgen.service: Deactivated successfully.
May 13 12:52:11.276072 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 12:52:11.294122 systemd-timesyncd[1456]: Contacted time server 104.131.139.195:123 (0.flatcar.pool.ntp.org).
May 13 12:52:11.294172 systemd-timesyncd[1456]: Initial clock synchronization to Tue 2025-05-13 12:52:11.664531 UTC.
May 13 12:52:11.297002 extend-filesystems[1538]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 12:52:11.297002 extend-filesystems[1538]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 12:52:11.297002 extend-filesystems[1538]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 13 12:52:11.299542 extend-filesystems[1516]: Resized filesystem in /dev/vda9
May 13 12:52:11.298101 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 12:52:11.298638 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 12:52:11.308142 tar[1537]: linux-amd64/LICENSE
May 13 12:52:11.308142 tar[1537]: linux-amd64/helm
May 13 12:52:11.324409 dbus-daemon[1513]: [system] SELinux support is enabled
May 13 12:52:11.326592 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 12:52:11.330586 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 12:52:11.330621 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 12:52:11.331140 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 12:52:11.331158 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 12:52:11.340869 systemd[1]: Started update-engine.service - Update Engine.
May 13 12:52:11.345513 update_engine[1526]: I20250513 12:52:11.344785 1526 update_check_scheduler.cc:74] Next update check in 6m32s
May 13 12:52:11.356566 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 12:52:11.386885 systemd-logind[1524]: New seat seat0.
May 13 12:52:11.391888 systemd-logind[1524]: Watching system buttons on /dev/input/event2 (Power Button)
May 13 12:52:11.391981 systemd-logind[1524]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 13 12:52:11.392199 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 12:52:11.443975 bash[1576]: Updated "/home/core/.ssh/authorized_keys"
May 13 12:52:11.446932 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 12:52:11.451811 systemd[1]: Starting sshkeys.service...
May 13 12:52:11.483001 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 13 12:52:11.488206 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 13 12:52:11.508951 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
May 13 12:52:11.594167 locksmithd[1572]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 12:52:11.757645 containerd[1551]: time="2025-05-13T12:52:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 13 12:52:11.759486 containerd[1551]: time="2025-05-13T12:52:11.759428152Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 13 12:52:11.787496 containerd[1551]: time="2025-05-13T12:52:11.786677248Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.862µs"
May 13 12:52:11.788127 containerd[1551]: time="2025-05-13T12:52:11.788107861Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 13 12:52:11.789977 containerd[1551]: time="2025-05-13T12:52:11.788527778Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 13 12:52:11.789977 containerd[1551]: time="2025-05-13T12:52:11.788712425Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 13 12:52:11.789977 containerd[1551]: time="2025-05-13T12:52:11.788732843Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 13 12:52:11.789977 containerd[1551]: time="2025-05-13T12:52:11.788760765Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 12:52:11.789977 containerd[1551]: time="2025-05-13T12:52:11.788824184Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 13 12:52:11.789977 containerd[1551]: time="2025-05-13T12:52:11.788838972Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 12:52:11.789977 containerd[1551]: time="2025-05-13T12:52:11.789088630Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 13 12:52:11.789977 containerd[1551]: time="2025-05-13T12:52:11.789104059Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 12:52:11.789977 containerd[1551]: time="2025-05-13T12:52:11.789115801Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 13 12:52:11.789977 containerd[1551]: time="2025-05-13T12:52:11.789125640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 13 12:52:11.789977 containerd[1551]: time="2025-05-13T12:52:11.789202794Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 13 12:52:11.791559 sshd_keygen[1534]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 12:52:11.792281 containerd[1551]: time="2025-05-13T12:52:11.792240101Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 12:52:11.792321 containerd[1551]: time="2025-05-13T12:52:11.792300384Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 13 12:52:11.792321 containerd[1551]: time="2025-05-13T12:52:11.792315863Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 13 12:52:11.792396 containerd[1551]: time="2025-05-13T12:52:11.792375455Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 13 12:52:11.793653 containerd[1551]: time="2025-05-13T12:52:11.793626090Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 13 12:52:11.793722 containerd[1551]: time="2025-05-13T12:52:11.793700069Z" level=info msg="metadata content store policy set" policy=shared
May 13 12:52:11.812677 containerd[1551]: time="2025-05-13T12:52:11.812628740Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 13 12:52:11.812942 containerd[1551]: time="2025-05-13T12:52:11.812924234Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 13 12:52:11.813341 containerd[1551]: time="2025-05-13T12:52:11.813324364Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 13 12:52:11.813413 containerd[1551]: time="2025-05-13T12:52:11.813397822Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.813463295Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.813846824Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.813862644Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.813877842Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.813892330Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.813903701Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.813915002Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.813928517Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.814056998Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.814078729Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.814094288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.814105529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.814116941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 13 12:52:11.815215 containerd[1551]: time="2025-05-13T12:52:11.814128492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 13 12:52:11.815553 containerd[1551]: time="2025-05-13T12:52:11.814141056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 13 12:52:11.815553 containerd[1551]: time="2025-05-13T12:52:11.814152137Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 13 12:52:11.815553 containerd[1551]: time="2025-05-13T12:52:11.814165602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 13 12:52:11.815553 containerd[1551]: time="2025-05-13T12:52:11.814177725Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 13 12:52:11.815553 containerd[1551]: time="2025-05-13T12:52:11.814190088Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 13 12:52:11.815553 containerd[1551]: time="2025-05-13T12:52:11.814258546Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 13 12:52:11.815553 containerd[1551]: time="2025-05-13T12:52:11.814272983Z" level=info msg="Start snapshots syncer"
May 13 12:52:11.815553 containerd[1551]: time="2025-05-13T12:52:11.814303691Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 13 12:52:11.815716 containerd[1551]: time="2025-05-13T12:52:11.814592793Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 13 12:52:11.815716 containerd[1551]: time="2025-05-13T12:52:11.814649660Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 13 12:52:11.818833 containerd[1551]: time="2025-05-13T12:52:11.818810043Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 13 12:52:11.818928 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 12:52:11.819576 containerd[1551]: time="2025-05-13T12:52:11.819556974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.819805810Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.819828894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.819850053Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.819865001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.819877204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.819890008Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.819922950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.819935483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.819947706Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.819979085Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.819995185Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.820004603Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.820014261Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 13 12:52:11.821025 containerd[1551]: time="2025-05-13T12:52:11.820022847Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 13 12:52:11.821410 containerd[1551]: time="2025-05-13T12:52:11.820032716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 13 12:52:11.821410 containerd[1551]: time="2025-05-13T12:52:11.820046912Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 13 12:52:11.821410 containerd[1551]: time="2025-05-13T12:52:11.820063504Z" level=info msg="runtime interface created"
May 13 12:52:11.821410 containerd[1551]: time="2025-05-13T12:52:11.820069104Z" level=info msg="created NRI interface"
May 13 12:52:11.821410 containerd[1551]: time="2025-05-13T12:52:11.820077340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 13 12:52:11.821410 containerd[1551]: time="2025-05-13T12:52:11.820095894Z" level=info msg="Connect containerd service"
May 13 12:52:11.821410 containerd[1551]: time="2025-05-13T12:52:11.820128986Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 12:52:11.822689 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 12:52:11.823932 containerd[1551]: time="2025-05-13T12:52:11.823834977Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 12:52:11.845981 systemd[1]: issuegen.service: Deactivated successfully.
May 13 12:52:11.846335 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 12:52:11.852883 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 12:52:11.888224 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 12:52:11.890191 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 12:52:11.891878 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 13 12:52:11.892405 systemd[1]: Reached target getty.target - Login Prompts.
May 13 12:52:12.040596 containerd[1551]: time="2025-05-13T12:52:12.040500726Z" level=info msg="Start subscribing containerd event"
May 13 12:52:12.040596 containerd[1551]: time="2025-05-13T12:52:12.040573005Z" level=info msg="Start recovering state"
May 13 12:52:12.040778 containerd[1551]: time="2025-05-13T12:52:12.040689127Z" level=info msg="Start event monitor"
May 13 12:52:12.040778 containerd[1551]: time="2025-05-13T12:52:12.040707000Z" level=info msg="Start cni network conf syncer for default"
May 13 12:52:12.040778 containerd[1551]: time="2025-05-13T12:52:12.040720204Z" level=info msg="Start streaming server"
May 13 12:52:12.040778 containerd[1551]: time="2025-05-13T12:52:12.040739770Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 13 12:52:12.040778 containerd[1551]: time="2025-05-13T12:52:12.040747866Z" level=info msg="runtime interface starting up..."
May 13 12:52:12.040778 containerd[1551]: time="2025-05-13T12:52:12.040754499Z" level=info msg="starting plugins..."
May 13 12:52:12.040778 containerd[1551]: time="2025-05-13T12:52:12.040768965Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 13 12:52:12.041262 containerd[1551]: time="2025-05-13T12:52:12.041224630Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 12:52:12.041405 containerd[1551]: time="2025-05-13T12:52:12.041357309Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 12:52:12.041710 systemd[1]: Started containerd.service - containerd container runtime.
May 13 12:52:12.044816 containerd[1551]: time="2025-05-13T12:52:12.044794493Z" level=info msg="containerd successfully booted in 0.287900s"
May 13 12:52:12.089630 tar[1537]: linux-amd64/README.md
May 13 12:52:12.117357 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 13 12:52:12.449562 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
May 13 12:52:12.569586 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
May 13 12:52:12.862604 systemd-networkd[1444]: eth0: Gained IPv6LL
May 13 12:52:12.867869 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 12:52:12.870233 systemd[1]: Reached target network-online.target - Network is Online.
May 13 12:52:12.875428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:52:12.879232 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 12:52:12.936665 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 13 12:52:14.468555 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
May 13 12:52:14.595570 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
May 13 12:52:15.397485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:52:15.414084 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 12:52:17.023264 login[1613]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 13 12:52:17.029891 login[1614]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 13 12:52:17.047928 systemd-logind[1524]: New session 1 of user core.
May 13 12:52:17.050314 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 12:52:17.053669 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 12:52:17.059120 systemd-logind[1524]: New session 2 of user core.
May 13 12:52:17.060451 kubelet[1645]: E0513 12:52:17.060317 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 12:52:17.063989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 12:52:17.064143 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 12:52:17.064801 systemd[1]: kubelet.service: Consumed 2.357s CPU time, 251.7M memory peak.
May 13 12:52:17.076077 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 12:52:17.079033 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 12:52:17.089906 (systemd)[1658]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 12:52:17.092318 systemd-logind[1524]: New session c1 of user core.
May 13 12:52:17.256929 systemd[1658]: Queued start job for default target default.target.
May 13 12:52:17.279852 systemd[1658]: Created slice app.slice - User Application Slice.
May 13 12:52:17.279998 systemd[1658]: Reached target paths.target - Paths.
May 13 12:52:17.280048 systemd[1658]: Reached target timers.target - Timers.
May 13 12:52:17.281355 systemd[1658]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 12:52:17.328743 systemd[1658]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 12:52:17.329203 systemd[1658]: Reached target sockets.target - Sockets.
May 13 12:52:17.329547 systemd[1658]: Reached target basic.target - Basic System.
May 13 12:52:17.329856 systemd[1658]: Reached target default.target - Main User Target.
May 13 12:52:17.329954 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 12:52:17.331306 systemd[1658]: Startup finished in 231ms.
May 13 12:52:17.343020 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 12:52:17.346196 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 12:52:17.513817 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 12:52:17.516987 systemd[1]: Started sshd@0-172.24.4.211:22-172.24.4.1:53784.service - OpenSSH per-connection server daemon (172.24.4.1:53784).
May 13 12:52:18.500560 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
May 13 12:52:18.518976 coreos-metadata[1512]: May 13 12:52:18.518 WARN failed to locate config-drive, using the metadata service API instead
May 13 12:52:18.526561 sshd[1689]: Accepted publickey for core from 172.24.4.1 port 53784 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:52:18.529627 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:52:18.543591 systemd-logind[1524]: New session 3 of user core.
May 13 12:52:18.549956 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 12:52:18.571173 coreos-metadata[1512]: May 13 12:52:18.571 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
May 13 12:52:18.619563 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
May 13 12:52:18.638098 coreos-metadata[1579]: May 13 12:52:18.637 WARN failed to locate config-drive, using the metadata service API instead
May 13 12:52:18.681280 coreos-metadata[1579]: May 13 12:52:18.681 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
May 13 12:52:18.761219 coreos-metadata[1512]: May 13 12:52:18.760 INFO Fetch successful
May 13 12:52:18.761219 coreos-metadata[1512]: May 13 12:52:18.761 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 13 12:52:18.775134 coreos-metadata[1512]: May 13 12:52:18.775 INFO Fetch successful
May 13 12:52:18.775385 coreos-metadata[1512]: May 13 12:52:18.775 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
May 13 12:52:18.789823 coreos-metadata[1512]: May 13 12:52:18.789 INFO Fetch successful
May 13 12:52:18.789823 coreos-metadata[1512]: May 13 12:52:18.789 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
May 13 12:52:18.803740 coreos-metadata[1512]: May 13 12:52:18.803 INFO Fetch successful
May 13 12:52:18.803740 coreos-metadata[1512]: May 13 12:52:18.803 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
May 13 12:52:18.817342 coreos-metadata[1512]: May 13 12:52:18.817 INFO Fetch successful
May 13 12:52:18.817342 coreos-metadata[1512]: May 13 12:52:18.817 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
May 13 12:52:18.824923 coreos-metadata[1579]: May 13 12:52:18.824 INFO Fetch successful
May 13 12:52:18.825031 coreos-metadata[1579]: May 13 12:52:18.824 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
May 13 12:52:18.831970 coreos-metadata[1512]: May 13 12:52:18.831 INFO Fetch successful
May 13 12:52:18.837623 coreos-metadata[1579]: May 13 12:52:18.837 INFO Fetch successful
May 13 12:52:18.844245 unknown[1579]: wrote ssh authorized keys file for user: core
May 13 12:52:18.885261 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 13 12:52:18.888088 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 13 12:52:18.905593 update-ssh-keys[1699]: Updated "/home/core/.ssh/authorized_keys"
May 13 12:52:18.906013 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 13 12:52:18.911601 systemd[1]: Finished sshkeys.service.
May 13 12:52:18.913407 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 12:52:18.913764 systemd[1]: Startup finished in 3.681s (kernel) + 16.172s (initrd) + 11.067s (userspace) = 30.921s.
May 13 12:52:19.167078 systemd[1]: Started sshd@1-172.24.4.211:22-172.24.4.1:53786.service - OpenSSH per-connection server daemon (172.24.4.1:53786).
May 13 12:52:20.659608 sshd[1707]: Accepted publickey for core from 172.24.4.1 port 53786 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:52:20.662226 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:52:20.675594 systemd-logind[1524]: New session 4 of user core.
May 13 12:52:20.683777 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 12:52:21.454947 sshd[1709]: Connection closed by 172.24.4.1 port 53786
May 13 12:52:21.457308 sshd-session[1707]: pam_unix(sshd:session): session closed for user core
May 13 12:52:21.473328 systemd[1]: sshd@1-172.24.4.211:22-172.24.4.1:53786.service: Deactivated successfully.
May 13 12:52:21.477484 systemd[1]: session-4.scope: Deactivated successfully.
May 13 12:52:21.479865 systemd-logind[1524]: Session 4 logged out. Waiting for processes to exit.
May 13 12:52:21.486618 systemd[1]: Started sshd@2-172.24.4.211:22-172.24.4.1:53798.service - OpenSSH per-connection server daemon (172.24.4.1:53798).
May 13 12:52:21.489317 systemd-logind[1524]: Removed session 4.
May 13 12:52:22.697596 sshd[1715]: Accepted publickey for core from 172.24.4.1 port 53798 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:52:22.700197 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:52:22.710888 systemd-logind[1524]: New session 5 of user core.
May 13 12:52:22.731854 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 12:52:23.450070 sshd[1717]: Connection closed by 172.24.4.1 port 53798
May 13 12:52:23.451083 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
May 13 12:52:23.468928 systemd[1]: sshd@2-172.24.4.211:22-172.24.4.1:53798.service: Deactivated successfully.
May 13 12:52:23.472109 systemd[1]: session-5.scope: Deactivated successfully.
May 13 12:52:23.475600 systemd-logind[1524]: Session 5 logged out. Waiting for processes to exit.
May 13 12:52:23.480760 systemd[1]: Started sshd@3-172.24.4.211:22-172.24.4.1:47258.service - OpenSSH per-connection server daemon (172.24.4.1:47258).
May 13 12:52:23.483382 systemd-logind[1524]: Removed session 5.
May 13 12:52:24.669204 sshd[1723]: Accepted publickey for core from 172.24.4.1 port 47258 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:52:24.672267 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:52:24.683578 systemd-logind[1524]: New session 6 of user core.
May 13 12:52:24.696948 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 12:52:25.461546 sshd[1725]: Connection closed by 172.24.4.1 port 47258
May 13 12:52:25.462864 sshd-session[1723]: pam_unix(sshd:session): session closed for user core
May 13 12:52:25.478162 systemd[1]: sshd@3-172.24.4.211:22-172.24.4.1:47258.service: Deactivated successfully.
May 13 12:52:25.482221 systemd[1]: session-6.scope: Deactivated successfully.
May 13 12:52:25.484375 systemd-logind[1524]: Session 6 logged out. Waiting for processes to exit.
May 13 12:52:25.490639 systemd[1]: Started sshd@4-172.24.4.211:22-172.24.4.1:47264.service - OpenSSH per-connection server daemon (172.24.4.1:47264).
May 13 12:52:25.493093 systemd-logind[1524]: Removed session 6.
May 13 12:52:26.886145 sshd[1731]: Accepted publickey for core from 172.24.4.1 port 47264 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:52:26.889012 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:52:26.900976 systemd-logind[1524]: New session 7 of user core.
May 13 12:52:26.906827 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 12:52:27.085018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 12:52:27.088885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:52:27.524087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:52:27.539190 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 12:52:27.547478 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 12:52:27.548803 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:52:27.570791 sudo[1739]: pam_unix(sudo:session): session closed for user root
May 13 12:52:27.633904 kubelet[1742]: E0513 12:52:27.633779 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 12:52:27.636800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 12:52:27.636947 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 12:52:27.637610 systemd[1]: kubelet.service: Consumed 331ms CPU time, 102.8M memory peak.
May 13 12:52:27.771526 sshd[1733]: Connection closed by 172.24.4.1 port 47264
May 13 12:52:27.772380 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
May 13 12:52:27.801070 systemd[1]: sshd@4-172.24.4.211:22-172.24.4.1:47264.service: Deactivated successfully.
May 13 12:52:27.804832 systemd[1]: session-7.scope: Deactivated successfully.
May 13 12:52:27.807258 systemd-logind[1524]: Session 7 logged out. Waiting for processes to exit.
May 13 12:52:27.813261 systemd[1]: Started sshd@5-172.24.4.211:22-172.24.4.1:47268.service - OpenSSH per-connection server daemon (172.24.4.1:47268).
May 13 12:52:27.816237 systemd-logind[1524]: Removed session 7.
May 13 12:52:29.085141 sshd[1755]: Accepted publickey for core from 172.24.4.1 port 47268 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:52:29.088017 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:52:29.099457 systemd-logind[1524]: New session 8 of user core.
May 13 12:52:29.110879 systemd[1]: Started session-8.scope - Session 8 of User core.
May 13 12:52:29.661269 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 12:52:29.662702 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:52:29.675314 sudo[1759]: pam_unix(sudo:session): session closed for user root
May 13 12:52:29.687061 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 13 12:52:29.688329 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:52:29.710114 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 12:52:29.800573 augenrules[1781]: No rules
May 13 12:52:29.804005 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 12:52:29.804521 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 12:52:29.807764 sudo[1758]: pam_unix(sudo:session): session closed for user root
May 13 12:52:30.067647 sshd[1757]: Connection closed by 172.24.4.1 port 47268
May 13 12:52:30.067699 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
May 13 12:52:30.084528 systemd[1]: sshd@5-172.24.4.211:22-172.24.4.1:47268.service: Deactivated successfully.
May 13 12:52:30.088238 systemd[1]: session-8.scope: Deactivated successfully.
May 13 12:52:30.090877 systemd-logind[1524]: Session 8 logged out. Waiting for processes to exit.
May 13 12:52:30.096314 systemd[1]: Started sshd@6-172.24.4.211:22-172.24.4.1:47284.service - OpenSSH per-connection server daemon (172.24.4.1:47284).
May 13 12:52:30.098191 systemd-logind[1524]: Removed session 8.
May 13 12:52:31.386375 sshd[1790]: Accepted publickey for core from 172.24.4.1 port 47284 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:52:31.389473 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:52:31.402592 systemd-logind[1524]: New session 9 of user core.
May 13 12:52:31.410826 systemd[1]: Started session-9.scope - Session 9 of User core.
May 13 12:52:31.957317 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 12:52:31.958702 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:52:32.606595 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 12:52:32.624075 (dockerd)[1812]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 12:52:33.035446 dockerd[1812]: time="2025-05-13T12:52:33.034954776Z" level=info msg="Starting up"
May 13 12:52:33.037415 dockerd[1812]: time="2025-05-13T12:52:33.037013141Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 13 12:52:33.100378 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3090633828-merged.mount: Deactivated successfully.
May 13 12:52:33.139622 systemd[1]: var-lib-docker-metacopy\x2dcheck1972315715-merged.mount: Deactivated successfully.
May 13 12:52:33.179305 dockerd[1812]: time="2025-05-13T12:52:33.179248454Z" level=info msg="Loading containers: start."
May 13 12:52:33.197193 kernel: Initializing XFRM netlink socket
May 13 12:52:33.773425 systemd-networkd[1444]: docker0: Link UP
May 13 12:52:33.783639 dockerd[1812]: time="2025-05-13T12:52:33.783512628Z" level=info msg="Loading containers: done."
May 13 12:52:33.819129 dockerd[1812]: time="2025-05-13T12:52:33.818961689Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 12:52:33.819129 dockerd[1812]: time="2025-05-13T12:52:33.819116782Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 13 12:52:33.819443 dockerd[1812]: time="2025-05-13T12:52:33.819345622Z" level=info msg="Initializing buildkit"
May 13 12:52:33.873544 dockerd[1812]: time="2025-05-13T12:52:33.873403342Z" level=info msg="Completed buildkit initialization"
May 13 12:52:33.892648 dockerd[1812]: time="2025-05-13T12:52:33.892535108Z" level=info msg="Daemon has completed initialization"
May 13 12:52:33.892822 dockerd[1812]: time="2025-05-13T12:52:33.892727472Z" level=info msg="API listen on /run/docker.sock"
May 13 12:52:33.893998 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 12:52:34.089638 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1704439894-merged.mount: Deactivated successfully.
May 13 12:52:35.705776 containerd[1551]: time="2025-05-13T12:52:35.705546133Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 13 12:52:36.516128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241499695.mount: Deactivated successfully.
May 13 12:52:37.834531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 12:52:37.836829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:52:37.974348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:52:37.983840 (kubelet)[2078]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 12:52:38.241637 kubelet[2078]: E0513 12:52:38.241100 2078 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 12:52:38.245356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 12:52:38.245628 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 12:52:38.246439 systemd[1]: kubelet.service: Consumed 161ms CPU time, 101.3M memory peak.
May 13 12:52:38.443097 containerd[1551]: time="2025-05-13T12:52:38.443011341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:38.444417 containerd[1551]: time="2025-05-13T12:52:38.444374822Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682887"
May 13 12:52:38.445642 containerd[1551]: time="2025-05-13T12:52:38.445564813Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:38.449274 containerd[1551]: time="2025-05-13T12:52:38.449197799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:38.451444 containerd[1551]: time="2025-05-13T12:52:38.451392370Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.745785462s"
May 13 12:52:38.451444 containerd[1551]: time="2025-05-13T12:52:38.451431864Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
May 13 12:52:38.452174 containerd[1551]: time="2025-05-13T12:52:38.452139361Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 13 12:52:40.486955 containerd[1551]: time="2025-05-13T12:52:40.486298869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:40.488471 containerd[1551]: time="2025-05-13T12:52:40.488420843Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779597"
May 13 12:52:40.489852 containerd[1551]: time="2025-05-13T12:52:40.489790966Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:40.492751 containerd[1551]: time="2025-05-13T12:52:40.492727200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:40.493946 containerd[1551]: time="2025-05-13T12:52:40.493718308Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.041463848s"
May 13 12:52:40.493946 containerd[1551]: time="2025-05-13T12:52:40.493760588Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
May 13 12:52:40.495185 containerd[1551]: time="2025-05-13T12:52:40.495005972Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 13 12:52:42.397452 containerd[1551]: time="2025-05-13T12:52:42.396853314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:42.400435 containerd[1551]: time="2025-05-13T12:52:42.400391210Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169946"
May 13 12:52:42.401726 containerd[1551]: time="2025-05-13T12:52:42.401675641Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:42.405776 containerd[1551]: time="2025-05-13T12:52:42.405725727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:42.406966 containerd[1551]: time="2025-05-13T12:52:42.406935471Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.911899962s"
May 13 12:52:42.407057 containerd[1551]: time="2025-05-13T12:52:42.407041485Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
May 13 12:52:42.416259 containerd[1551]: time="2025-05-13T12:52:42.415940753Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 13 12:52:43.813931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196844886.mount: Deactivated successfully.
May 13 12:52:44.598509 containerd[1551]: time="2025-05-13T12:52:44.598009494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:44.599629 containerd[1551]: time="2025-05-13T12:52:44.599608505Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917864"
May 13 12:52:44.601197 containerd[1551]: time="2025-05-13T12:52:44.601174469Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:44.603707 containerd[1551]: time="2025-05-13T12:52:44.603686790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:44.604452 containerd[1551]: time="2025-05-13T12:52:44.604284873Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.188303716s"
May 13 12:52:44.604452 containerd[1551]: time="2025-05-13T12:52:44.604335754Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
May 13 12:52:44.605393 containerd[1551]: time="2025-05-13T12:52:44.605219836Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 13 12:52:45.254628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount475140815.mount: Deactivated successfully.
May 13 12:52:46.635960 containerd[1551]: time="2025-05-13T12:52:46.635912718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:46.638473 containerd[1551]: time="2025-05-13T12:52:46.638454317Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
May 13 12:52:46.639750 containerd[1551]: time="2025-05-13T12:52:46.639727192Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:46.643659 containerd[1551]: time="2025-05-13T12:52:46.643632004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:46.644918 containerd[1551]: time="2025-05-13T12:52:46.644870612Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.039600007s"
May 13 12:52:46.644977 containerd[1551]: time="2025-05-13T12:52:46.644919019Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 13 12:52:46.646330 containerd[1551]: time="2025-05-13T12:52:46.646298619Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 12:52:47.221423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2542783730.mount: Deactivated successfully.
May 13 12:52:47.226052 containerd[1551]: time="2025-05-13T12:52:47.225664446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 12:52:47.228066 containerd[1551]: time="2025-05-13T12:52:47.227982274Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
May 13 12:52:47.230682 containerd[1551]: time="2025-05-13T12:52:47.230576496Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 12:52:47.236024 containerd[1551]: time="2025-05-13T12:52:47.235876971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 12:52:47.238549 containerd[1551]: time="2025-05-13T12:52:47.237907657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 591.560561ms"
May 13 12:52:47.238549 containerd[1551]: time="2025-05-13T12:52:47.237982436Z"
level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 13 12:52:47.239104 containerd[1551]: time="2025-05-13T12:52:47.239055686Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 12:52:48.231823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820570762.mount: Deactivated successfully. May 13 12:52:48.335878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 12:52:48.341202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:52:48.839281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:52:48.851854 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:52:48.927014 kubelet[2178]: E0513 12:52:48.926961 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:52:48.929812 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:52:48.930031 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:52:48.930393 systemd[1]: kubelet.service: Consumed 186ms CPU time, 103.8M memory peak. 
May 13 12:52:50.980021 containerd[1551]: time="2025-05-13T12:52:50.979915188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:50.981535 containerd[1551]: time="2025-05-13T12:52:50.981353507Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368"
May 13 12:52:50.982849 containerd[1551]: time="2025-05-13T12:52:50.982790872Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:50.986379 containerd[1551]: time="2025-05-13T12:52:50.986304540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:52:50.988198 containerd[1551]: time="2025-05-13T12:52:50.987525772Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.74797373s"
May 13 12:52:50.988198 containerd[1551]: time="2025-05-13T12:52:50.987559220Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 13 12:52:54.741649 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:52:54.742098 systemd[1]: kubelet.service: Consumed 186ms CPU time, 103.8M memory peak.
May 13 12:52:54.751115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:52:54.816569 systemd[1]: Reload requested from client PID 2256 ('systemctl') (unit session-9.scope)...
May 13 12:52:54.816588 systemd[1]: Reloading...
May 13 12:52:54.929534 zram_generator::config[2304]: No configuration found.
May 13 12:52:55.050170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:52:55.190289 systemd[1]: Reloading finished in 373 ms.
May 13 12:52:55.248614 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 13 12:52:55.248809 systemd[1]: kubelet.service: Failed with result 'signal'.
May 13 12:52:55.249333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:52:55.249440 systemd[1]: kubelet.service: Consumed 136ms CPU time, 91.8M memory peak.
May 13 12:52:55.252457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:52:56.510752 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:52:56.532220 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 12:52:56.610524 kubelet[2366]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 12:52:56.610524 kubelet[2366]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 13 12:52:56.610524 kubelet[2366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 12:52:56.610524 kubelet[2366]: I0513 12:52:56.610041 2366 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 12:52:56.991209 kubelet[2366]: I0513 12:52:56.991154 2366 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 13 12:52:56.991339 kubelet[2366]: I0513 12:52:56.991221 2366 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 12:52:56.992435 kubelet[2366]: I0513 12:52:56.992415 2366 server.go:954] "Client rotation is on, will bootstrap in background"
May 13 12:52:57.006858 update_engine[1526]: I20250513 12:52:57.006817 1526 update_attempter.cc:509] Updating boot flags...
May 13 12:52:57.033692 kubelet[2366]: E0513 12:52:57.033348 2366 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.211:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.211:6443: connect: connection refused" logger="UnhandledError"
May 13 12:52:57.037512 kubelet[2366]: I0513 12:52:57.037085 2366 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 12:52:57.086698 kubelet[2366]: I0513 12:52:57.086664 2366 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 12:52:57.095508 kubelet[2366]: I0513 12:52:57.093883 2366 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 12:52:57.095508 kubelet[2366]: I0513 12:52:57.094060 2366 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 12:52:57.095508 kubelet[2366]: I0513 12:52:57.094086 2366 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999-9-100-4cb33ef211.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 12:52:57.095508 kubelet[2366]: I0513 12:52:57.094261 2366 topology_manager.go:138] "Creating topology manager with none policy"
May 13 12:52:57.095755 kubelet[2366]: I0513 12:52:57.094271 2366 container_manager_linux.go:304] "Creating device plugin manager"
May 13 12:52:57.095755 kubelet[2366]: I0513 12:52:57.094378 2366 state_mem.go:36] "Initialized new in-memory state store"
May 13 12:52:57.103733 kubelet[2366]: I0513 12:52:57.103705 2366 kubelet.go:446] "Attempting to sync node with API server"
May 13 12:52:57.103844 kubelet[2366]: I0513 12:52:57.103825 2366 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 12:52:57.103879 kubelet[2366]: I0513 12:52:57.103857 2366 kubelet.go:352] "Adding apiserver pod source"
May 13 12:52:57.103879 kubelet[2366]: I0513 12:52:57.103869 2366 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 12:52:57.124328 kubelet[2366]: W0513 12:52:57.124266 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.211:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999-9-100-4cb33ef211.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
May 13 12:52:57.124437 kubelet[2366]: E0513 12:52:57.124336 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.211:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999-9-100-4cb33ef211.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.211:6443: connect: connection refused" logger="UnhandledError"
May 13 12:52:57.126512 kubelet[2366]: I0513 12:52:57.126463 2366 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 13 12:52:57.126921 kubelet[2366]: I0513 12:52:57.126897 2366 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 12:52:57.128081 kubelet[2366]: W0513 12:52:57.128058 2366 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 12:52:57.139508 kubelet[2366]: I0513 12:52:57.138616 2366 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 13 12:52:57.139508 kubelet[2366]: I0513 12:52:57.138656 2366 server.go:1287] "Started kubelet"
May 13 12:52:57.153835 kubelet[2366]: I0513 12:52:57.153773 2366 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 13 12:52:57.155600 kubelet[2366]: I0513 12:52:57.154841 2366 server.go:490] "Adding debug handlers to kubelet server"
May 13 12:52:57.157289 kubelet[2366]: W0513 12:52:57.157239 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.211:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
May 13 12:52:57.161915 kubelet[2366]: E0513 12:52:57.161854 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.211:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.211:6443: connect: connection refused" logger="UnhandledError"
May 13 12:52:57.162055 kubelet[2366]: I0513 12:52:57.160986 2366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 12:52:57.167446 kubelet[2366]: I0513 12:52:57.157728 2366 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 12:52:57.168377 kubelet[2366]: I0513 12:52:57.168358 2366 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 12:52:57.168509 kubelet[2366]: I0513 12:52:57.161363 2366 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 12:52:57.169509 kubelet[2366]: E0513 12:52:57.167468 2366 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.211:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.211:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-9999-9-100-4cb33ef211.novalocal.183f1747b1f2bfe5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-9999-9-100-4cb33ef211.novalocal,UID:ci-9999-9-100-4cb33ef211.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-9999-9-100-4cb33ef211.novalocal,},FirstTimestamp:2025-05-13 12:52:57.138634725 +0000 UTC m=+0.598337011,LastTimestamp:2025-05-13 12:52:57.138634725 +0000 UTC m=+0.598337011,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999-9-100-4cb33ef211.novalocal,}"
May 13 12:52:57.169509 kubelet[2366]: I0513 12:52:57.169195 2366 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 13 12:52:57.169509 kubelet[2366]: E0513 12:52:57.169406 2366 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-9999-9-100-4cb33ef211.novalocal\" not found"
May 13 12:52:57.170217 kubelet[2366]: E0513 12:52:57.169735 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-100-4cb33ef211.novalocal?timeout=10s\": dial tcp 172.24.4.211:6443: connect: connection refused" interval="200ms"
May 13 12:52:57.170266 kubelet[2366]: W0513 12:52:57.170203 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.211:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
May 13 12:52:57.170266 kubelet[2366]: E0513 12:52:57.170252 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.211:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.211:6443: connect: connection refused" logger="UnhandledError"
May 13 12:52:57.176356 kubelet[2366]: I0513 12:52:57.176314 2366 factory.go:221] Registration of the systemd container factory successfully
May 13 12:52:57.176593 kubelet[2366]: I0513 12:52:57.176561 2366 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 12:52:57.177445 kubelet[2366]: I0513 12:52:57.177426 2366 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 12:52:57.177817 kubelet[2366]: I0513 12:52:57.177797 2366 reconciler.go:26] "Reconciler: start to sync state"
May 13 12:52:57.179107 kubelet[2366]: I0513 12:52:57.179086 2366 factory.go:221] Registration of the containerd container factory successfully
May 13 12:52:57.200492 kubelet[2366]: E0513 12:52:57.200444 2366 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 12:52:57.230239 kubelet[2366]: I0513 12:52:57.229893 2366 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 13 12:52:57.230239 kubelet[2366]: I0513 12:52:57.229910 2366 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 13 12:52:57.230239 kubelet[2366]: I0513 12:52:57.229926 2366 state_mem.go:36] "Initialized new in-memory state store"
May 13 12:52:57.232320 kubelet[2366]: I0513 12:52:57.231379 2366 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 12:52:57.237714 kubelet[2366]: I0513 12:52:57.237663 2366 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 12:52:57.238116 kubelet[2366]: I0513 12:52:57.237975 2366 status_manager.go:227] "Starting to sync pod status with apiserver"
May 13 12:52:57.238458 kubelet[2366]: I0513 12:52:57.238035 2366 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 13 12:52:57.238458 kubelet[2366]: I0513 12:52:57.238378 2366 kubelet.go:2388] "Starting kubelet main sync loop"
May 13 12:52:57.238844 kubelet[2366]: E0513 12:52:57.238792 2366 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 12:52:57.244493 kubelet[2366]: I0513 12:52:57.243694 2366 policy_none.go:49] "None policy: Start"
May 13 12:52:57.244493 kubelet[2366]: I0513 12:52:57.243718 2366 memory_manager.go:186] "Starting memorymanager" policy="None"
May 13 12:52:57.244493 kubelet[2366]: I0513 12:52:57.243729 2366 state_mem.go:35] "Initializing new in-memory state store"
May 13 12:52:57.254722 kubelet[2366]: W0513 12:52:57.254671 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.211:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused
May 13 12:52:57.257742 kubelet[2366]: E0513 12:52:57.257710 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.211:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.211:6443: connect: connection refused" logger="UnhandledError"
May 13 12:52:57.271680 kubelet[2366]: E0513 12:52:57.271657 2366 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-9999-9-100-4cb33ef211.novalocal\" not found"
May 13 12:52:57.278423 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 13 12:52:57.320380 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 13 12:52:57.329464 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 13 12:52:57.337243 kubelet[2366]: I0513 12:52:57.337217 2366 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 12:52:57.337638 kubelet[2366]: I0513 12:52:57.337607 2366 eviction_manager.go:189] "Eviction manager: starting control loop"
May 13 12:52:57.337678 kubelet[2366]: I0513 12:52:57.337640 2366 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 12:52:57.337907 kubelet[2366]: I0513 12:52:57.337890 2366 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 12:52:57.341018 kubelet[2366]: E0513 12:52:57.341000 2366 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 13 12:52:57.341204 kubelet[2366]: E0513 12:52:57.341157 2366 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-9999-9-100-4cb33ef211.novalocal\" not found"
May 13 12:52:57.352736 systemd[1]: Created slice kubepods-burstable-podb2fae5123a27bb66af6abaab11d50480.slice - libcontainer container kubepods-burstable-podb2fae5123a27bb66af6abaab11d50480.slice.
May 13 12:52:57.371150 kubelet[2366]: E0513 12:52:57.371107 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-100-4cb33ef211.novalocal?timeout=10s\": dial tcp 172.24.4.211:6443: connect: connection refused" interval="400ms"
May 13 12:52:57.373781 kubelet[2366]: E0513 12:52:57.373747 2366 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-4cb33ef211.novalocal\" not found" node="ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.377545 systemd[1]: Created slice kubepods-burstable-pod8d0135e9287a0c402dd2c60153a5466a.slice - libcontainer container kubepods-burstable-pod8d0135e9287a0c402dd2c60153a5466a.slice.
May 13 12:52:57.379259 kubelet[2366]: I0513 12:52:57.378883 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2fae5123a27bb66af6abaab11d50480-ca-certs\") pod \"kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"b2fae5123a27bb66af6abaab11d50480\") " pod="kube-system/kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.379259 kubelet[2366]: I0513 12:52:57.378923 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2fae5123a27bb66af6abaab11d50480-k8s-certs\") pod \"kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"b2fae5123a27bb66af6abaab11d50480\") " pod="kube-system/kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.379259 kubelet[2366]: I0513 12:52:57.378965 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d0135e9287a0c402dd2c60153a5466a-ca-certs\") pod \"kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"8d0135e9287a0c402dd2c60153a5466a\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.379259 kubelet[2366]: I0513 12:52:57.379000 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d0135e9287a0c402dd2c60153a5466a-k8s-certs\") pod \"kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"8d0135e9287a0c402dd2c60153a5466a\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.379259 kubelet[2366]: I0513 12:52:57.379021 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a40f1c2480df765fd01e8b741606712-kubeconfig\") pod \"kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"4a40f1c2480df765fd01e8b741606712\") " pod="kube-system/kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.379411 kubelet[2366]: I0513 12:52:57.379061 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2fae5123a27bb66af6abaab11d50480-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"b2fae5123a27bb66af6abaab11d50480\") " pod="kube-system/kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.379411 kubelet[2366]: I0513 12:52:57.379088 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d0135e9287a0c402dd2c60153a5466a-flexvolume-dir\") pod \"kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"8d0135e9287a0c402dd2c60153a5466a\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.379411 kubelet[2366]: I0513 12:52:57.379107 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d0135e9287a0c402dd2c60153a5466a-kubeconfig\") pod \"kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"8d0135e9287a0c402dd2c60153a5466a\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.379411 kubelet[2366]: I0513 12:52:57.379309 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d0135e9287a0c402dd2c60153a5466a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"8d0135e9287a0c402dd2c60153a5466a\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.380369 kubelet[2366]: E0513 12:52:57.380346 2366 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-4cb33ef211.novalocal\" not found" node="ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.383159 systemd[1]: Created slice kubepods-burstable-pod4a40f1c2480df765fd01e8b741606712.slice - libcontainer container kubepods-burstable-pod4a40f1c2480df765fd01e8b741606712.slice.
May 13 12:52:57.384992 kubelet[2366]: E0513 12:52:57.384963 2366 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-4cb33ef211.novalocal\" not found" node="ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.441952 kubelet[2366]: I0513 12:52:57.441827 2366 kubelet_node_status.go:76] "Attempting to register node" node="ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.442669 kubelet[2366]: E0513 12:52:57.442576 2366 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.211:6443/api/v1/nodes\": dial tcp 172.24.4.211:6443: connect: connection refused" node="ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.646470 kubelet[2366]: I0513 12:52:57.646417 2366 kubelet_node_status.go:76] "Attempting to register node" node="ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.647112 kubelet[2366]: E0513 12:52:57.646983 2366 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.211:6443/api/v1/nodes\": dial tcp 172.24.4.211:6443: connect: connection refused" node="ci-9999-9-100-4cb33ef211.novalocal"
May 13 12:52:57.675255 containerd[1551]: time="2025-05-13T12:52:57.675154450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal,Uid:b2fae5123a27bb66af6abaab11d50480,Namespace:kube-system,Attempt:0,}"
May 13 12:52:57.682502 containerd[1551]: time="2025-05-13T12:52:57.682375263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal,Uid:8d0135e9287a0c402dd2c60153a5466a,Namespace:kube-system,Attempt:0,}"
May 13 12:52:57.687968 containerd[1551]: time="2025-05-13T12:52:57.687908700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal,Uid:4a40f1c2480df765fd01e8b741606712,Namespace:kube-system,Attempt:0,}"
May 13 12:52:57.772140 kubelet[2366]: E0513 12:52:57.771926 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.211:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-100-4cb33ef211.novalocal?timeout=10s\": dial tcp 172.24.4.211:6443: connect: connection refused" interval="800ms"
May 13 12:52:57.790218 containerd[1551]: time="2025-05-13T12:52:57.788464697Z" level=info msg="connecting to shim 98b5dc6f4ef9e42d0f8806eb6e52abd0ae250b6d091f8760aabc936f2e1b4ac8" address="unix:///run/containerd/s/a7799db625287ac4a2e9068ea923cd71a829cb42ee1c5b60d362e4486d7bcdc5" namespace=k8s.io protocol=ttrpc version=3
May 13 12:52:57.793372 containerd[1551]: time="2025-05-13T12:52:57.793243290Z" level=info msg="connecting to shim 0f83b75909ecc3354953212c8cf075ec0186791158e31bcde62f65f39b29b691" address="unix:///run/containerd/s/ca29ec5ea521e9ece0f4cb8f05dc314bf7b98e876867df2606455e8bf342fc0e" namespace=k8s.io protocol=ttrpc version=3
May 13 12:52:57.815504 containerd[1551]: time="2025-05-13T12:52:57.815003524Z" level=info msg="connecting to shim 4a847720c0ed102da04ad6eb911b8c0733e5c2d61dcaa43f44f37e36112fd57d" address="unix:///run/containerd/s/ca44b6af48d783d375fabcfd885065ca064c40e9462f171f1bfeff1d12d2f26f" namespace=k8s.io protocol=ttrpc version=3
May 13 12:52:57.842694 systemd[1]: Started cri-containerd-0f83b75909ecc3354953212c8cf075ec0186791158e31bcde62f65f39b29b691.scope - libcontainer container 0f83b75909ecc3354953212c8cf075ec0186791158e31bcde62f65f39b29b691.
May 13 12:52:57.849567 systemd[1]: Started cri-containerd-4a847720c0ed102da04ad6eb911b8c0733e5c2d61dcaa43f44f37e36112fd57d.scope - libcontainer container 4a847720c0ed102da04ad6eb911b8c0733e5c2d61dcaa43f44f37e36112fd57d.
May 13 12:52:57.851838 systemd[1]: Started cri-containerd-98b5dc6f4ef9e42d0f8806eb6e52abd0ae250b6d091f8760aabc936f2e1b4ac8.scope - libcontainer container 98b5dc6f4ef9e42d0f8806eb6e52abd0ae250b6d091f8760aabc936f2e1b4ac8.
May 13 12:52:57.906591 containerd[1551]: time="2025-05-13T12:52:57.905993301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal,Uid:b2fae5123a27bb66af6abaab11d50480,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f83b75909ecc3354953212c8cf075ec0186791158e31bcde62f65f39b29b691\"" May 13 12:52:57.913017 containerd[1551]: time="2025-05-13T12:52:57.912763445Z" level=info msg="CreateContainer within sandbox \"0f83b75909ecc3354953212c8cf075ec0186791158e31bcde62f65f39b29b691\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 12:52:57.930504 containerd[1551]: time="2025-05-13T12:52:57.930319413Z" level=info msg="Container 0532b6df191d44686438f964db7d3d3c9419ecb08d1d63efe028809c75886829: CDI devices from CRI Config.CDIDevices: []" May 13 12:52:57.933894 containerd[1551]: time="2025-05-13T12:52:57.933697173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal,Uid:4a40f1c2480df765fd01e8b741606712,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a847720c0ed102da04ad6eb911b8c0733e5c2d61dcaa43f44f37e36112fd57d\"" May 13 12:52:57.940803 containerd[1551]: time="2025-05-13T12:52:57.940690035Z" level=info msg="CreateContainer within sandbox \"4a847720c0ed102da04ad6eb911b8c0733e5c2d61dcaa43f44f37e36112fd57d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 12:52:57.951355 containerd[1551]: time="2025-05-13T12:52:57.951313077Z" level=info msg="Container 33ace06d6ce68c383a67f94fbc7bede07b053fbaca123a515abae16317ae1cf1: CDI devices from CRI Config.CDIDevices: []" May 13 12:52:57.958829 containerd[1551]: time="2025-05-13T12:52:57.958678709Z" level=info msg="CreateContainer within sandbox \"0f83b75909ecc3354953212c8cf075ec0186791158e31bcde62f65f39b29b691\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0532b6df191d44686438f964db7d3d3c9419ecb08d1d63efe028809c75886829\"" May 13 
12:52:57.959954 containerd[1551]: time="2025-05-13T12:52:57.959600757Z" level=info msg="StartContainer for \"0532b6df191d44686438f964db7d3d3c9419ecb08d1d63efe028809c75886829\"" May 13 12:52:57.961099 containerd[1551]: time="2025-05-13T12:52:57.961069976Z" level=info msg="connecting to shim 0532b6df191d44686438f964db7d3d3c9419ecb08d1d63efe028809c75886829" address="unix:///run/containerd/s/ca29ec5ea521e9ece0f4cb8f05dc314bf7b98e876867df2606455e8bf342fc0e" protocol=ttrpc version=3 May 13 12:52:57.969008 containerd[1551]: time="2025-05-13T12:52:57.968896101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal,Uid:8d0135e9287a0c402dd2c60153a5466a,Namespace:kube-system,Attempt:0,} returns sandbox id \"98b5dc6f4ef9e42d0f8806eb6e52abd0ae250b6d091f8760aabc936f2e1b4ac8\"" May 13 12:52:57.971613 containerd[1551]: time="2025-05-13T12:52:57.971593255Z" level=info msg="CreateContainer within sandbox \"98b5dc6f4ef9e42d0f8806eb6e52abd0ae250b6d091f8760aabc936f2e1b4ac8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 12:52:57.972507 containerd[1551]: time="2025-05-13T12:52:57.972268738Z" level=info msg="CreateContainer within sandbox \"4a847720c0ed102da04ad6eb911b8c0733e5c2d61dcaa43f44f37e36112fd57d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"33ace06d6ce68c383a67f94fbc7bede07b053fbaca123a515abae16317ae1cf1\"" May 13 12:52:57.972714 containerd[1551]: time="2025-05-13T12:52:57.972693955Z" level=info msg="StartContainer for \"33ace06d6ce68c383a67f94fbc7bede07b053fbaca123a515abae16317ae1cf1\"" May 13 12:52:57.975737 containerd[1551]: time="2025-05-13T12:52:57.975713890Z" level=info msg="connecting to shim 33ace06d6ce68c383a67f94fbc7bede07b053fbaca123a515abae16317ae1cf1" address="unix:///run/containerd/s/ca44b6af48d783d375fabcfd885065ca064c40e9462f171f1bfeff1d12d2f26f" protocol=ttrpc version=3 May 13 12:52:57.980997 kubelet[2366]: W0513 12:52:57.980924 
2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.211:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.211:6443: connect: connection refused May 13 12:52:57.980997 kubelet[2366]: E0513 12:52:57.980997 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.211:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.211:6443: connect: connection refused" logger="UnhandledError" May 13 12:52:57.992685 systemd[1]: Started cri-containerd-0532b6df191d44686438f964db7d3d3c9419ecb08d1d63efe028809c75886829.scope - libcontainer container 0532b6df191d44686438f964db7d3d3c9419ecb08d1d63efe028809c75886829. May 13 12:52:57.995900 containerd[1551]: time="2025-05-13T12:52:57.995863172Z" level=info msg="Container 3d71b0821609dc7c31efb18842411da5120f94d049d71f37da9e9206c55f1239: CDI devices from CRI Config.CDIDevices: []" May 13 12:52:57.999770 systemd[1]: Started cri-containerd-33ace06d6ce68c383a67f94fbc7bede07b053fbaca123a515abae16317ae1cf1.scope - libcontainer container 33ace06d6ce68c383a67f94fbc7bede07b053fbaca123a515abae16317ae1cf1. 
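The reflector's failed list request above carries a percent-encoded field selector, `spec.clusterIP%21%3DNone`, i.e. "services whose cluster IP is not None". Decoding the query string with the standard library recovers the readable selector:

```python
from urllib.parse import urlsplit, parse_qs

# List URL from the reflector error entry above
url = ("https://172.24.4.211:6443/api/v1/services"
       "?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0")

query = parse_qs(urlsplit(url).query)  # parse_qs percent-decodes values
selector = query["fieldSelector"][0]
print(selector)  # spec.clusterIP!=None
print(query["limit"][0], query["resourceVersion"][0])  # 500 0
```

`%21%3D` is just the URL encoding of `!=`; the connection-refused failure itself is the apiserver container not yet listening on 172.24.4.211:6443, consistent with the registration errors earlier in the log.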
May 13 12:52:58.014490 containerd[1551]: time="2025-05-13T12:52:58.014358793Z" level=info msg="CreateContainer within sandbox \"98b5dc6f4ef9e42d0f8806eb6e52abd0ae250b6d091f8760aabc936f2e1b4ac8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3d71b0821609dc7c31efb18842411da5120f94d049d71f37da9e9206c55f1239\"" May 13 12:52:58.017975 containerd[1551]: time="2025-05-13T12:52:58.017823924Z" level=info msg="StartContainer for \"3d71b0821609dc7c31efb18842411da5120f94d049d71f37da9e9206c55f1239\"" May 13 12:52:58.022218 containerd[1551]: time="2025-05-13T12:52:58.021952598Z" level=info msg="connecting to shim 3d71b0821609dc7c31efb18842411da5120f94d049d71f37da9e9206c55f1239" address="unix:///run/containerd/s/a7799db625287ac4a2e9068ea923cd71a829cb42ee1c5b60d362e4486d7bcdc5" protocol=ttrpc version=3 May 13 12:52:58.051032 kubelet[2366]: I0513 12:52:58.050952 2366 kubelet_node_status.go:76] "Attempting to register node" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:52:58.053780 kubelet[2366]: E0513 12:52:58.052712 2366 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.211:6443/api/v1/nodes\": dial tcp 172.24.4.211:6443: connect: connection refused" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:52:58.059726 systemd[1]: Started cri-containerd-3d71b0821609dc7c31efb18842411da5120f94d049d71f37da9e9206c55f1239.scope - libcontainer container 3d71b0821609dc7c31efb18842411da5120f94d049d71f37da9e9206c55f1239. 
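The containerd entries around here follow the CRI ordering for each static pod: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, then StartContainer runs it. A toy model of that sequence (class and method names are illustrative, not the real CRI client API):

```python
class FakeCRI:
    """Toy stand-in for the CRI call ordering seen in the containerd entries."""

    def __init__(self):
        self.events = []

    def run_pod_sandbox(self, pod_name):
        self.events.append("RunPodSandbox")
        return f"sandbox-for-{pod_name}"

    def create_container(self, sandbox_id, name):
        self.events.append("CreateContainer")
        return f"ctr-{name}-in-{sandbox_id}"

    def start_container(self, container_id):
        self.events.append("StartContainer")

cri = FakeCRI()
sandbox = cri.run_pod_sandbox("kube-apiserver")
ctr = cri.create_container(sandbox, "kube-apiserver")
cri.start_container(ctr)
print(cri.events)  # ['RunPodSandbox', 'CreateContainer', 'StartContainer']
```

In the log the three phases interleave across the apiserver, scheduler, and controller-manager pods, but each pod individually follows this order.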
May 13 12:52:58.090219 containerd[1551]: time="2025-05-13T12:52:58.090182200Z" level=info msg="StartContainer for \"33ace06d6ce68c383a67f94fbc7bede07b053fbaca123a515abae16317ae1cf1\" returns successfully" May 13 12:52:58.106644 containerd[1551]: time="2025-05-13T12:52:58.106596792Z" level=info msg="StartContainer for \"0532b6df191d44686438f964db7d3d3c9419ecb08d1d63efe028809c75886829\" returns successfully" May 13 12:52:58.140792 containerd[1551]: time="2025-05-13T12:52:58.140744115Z" level=info msg="StartContainer for \"3d71b0821609dc7c31efb18842411da5120f94d049d71f37da9e9206c55f1239\" returns successfully" May 13 12:52:58.263370 kubelet[2366]: E0513 12:52:58.262783 2366 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-4cb33ef211.novalocal\" not found" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:52:58.267207 kubelet[2366]: E0513 12:52:58.267172 2366 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-4cb33ef211.novalocal\" not found" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:52:58.270358 kubelet[2366]: E0513 12:52:58.270243 2366 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-4cb33ef211.novalocal\" not found" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:52:58.855778 kubelet[2366]: I0513 12:52:58.855619 2366 kubelet_node_status.go:76] "Attempting to register node" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:52:59.271734 kubelet[2366]: E0513 12:52:59.270409 2366 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-4cb33ef211.novalocal\" not found" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:52:59.271734 kubelet[2366]: E0513 12:52:59.270701 2366 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ci-9999-9-100-4cb33ef211.novalocal\" not found" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:52:59.271734 kubelet[2366]: E0513 12:52:59.270920 2366 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-4cb33ef211.novalocal\" not found" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:00.863764 kubelet[2366]: E0513 12:53:00.863726 2366 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-9999-9-100-4cb33ef211.novalocal\" not found" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:00.889666 kubelet[2366]: I0513 12:53:00.889608 2366 kubelet_node_status.go:79] "Successfully registered node" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:00.970767 kubelet[2366]: I0513 12:53:00.970731 2366 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:00.982504 kubelet[2366]: E0513 12:53:00.982337 2366 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:00.982504 kubelet[2366]: I0513 12:53:00.982375 2366 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:00.985499 kubelet[2366]: E0513 12:53:00.985321 2366 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:00.985499 kubelet[2366]: I0513 12:53:00.985349 2366 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:00.987367 kubelet[2366]: E0513 12:53:00.987344 2366 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:01.147058 kubelet[2366]: I0513 12:53:01.146785 2366 apiserver.go:52] "Watching apiserver" May 13 12:53:01.178178 kubelet[2366]: I0513 12:53:01.178061 2366 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 12:53:03.536837 systemd[1]: Reload requested from client PID 2649 ('systemctl') (unit session-9.scope)... May 13 12:53:03.537136 systemd[1]: Reloading... May 13 12:53:03.664518 zram_generator::config[2696]: No configuration found. May 13 12:53:03.683329 kubelet[2366]: I0513 12:53:03.682211 2366 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:03.691409 kubelet[2366]: W0513 12:53:03.691352 2366 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 12:53:03.791582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:53:03.957388 systemd[1]: Reloading finished in 419 ms. May 13 12:53:03.984847 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:53:03.997649 systemd[1]: kubelet.service: Deactivated successfully. May 13 12:53:03.997898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:53:03.997951 systemd[1]: kubelet.service: Consumed 1.097s CPU time, 125.2M memory peak. 
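The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" entries above show apiserver admission rejecting the mirror pods until the bootstrap priority classes exist; the errors stop once they are created. A toy admission check illustrating that gate (the helper is hypothetical, not Kubernetes code):

```python
def admit_pod(priority_class_name, known_classes):
    """Reject a pod referencing a PriorityClass that does not exist yet."""
    if priority_class_name and priority_class_name not in known_classes:
        raise PermissionError(
            f"no PriorityClass with name {priority_class_name} was found")
    return True

# Before the bootstrap classes exist, admission fails (as in the log)...
try:
    admit_pod("system-node-critical", set())
except PermissionError as err:
    print(err)  # no PriorityClass with name system-node-critical was found

# ...and succeeds once they have been created.
print(admit_pod("system-node-critical",
                {"system-node-critical", "system-cluster-critical"}))  # True
```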
May 13 12:53:04.001782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:53:04.152528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:53:04.171194 (kubelet)[2758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 12:53:04.365649 kubelet[2758]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:53:04.366329 kubelet[2758]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 12:53:04.366329 kubelet[2758]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:53:04.366329 kubelet[2758]: I0513 12:53:04.366054 2758 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 12:53:04.375320 kubelet[2758]: I0513 12:53:04.375299 2758 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 12:53:04.375421 kubelet[2758]: I0513 12:53:04.375410 2758 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 12:53:04.376819 kubelet[2758]: I0513 12:53:04.376802 2758 server.go:954] "Client rotation is on, will bootstrap in background" May 13 12:53:04.381829 kubelet[2758]: I0513 12:53:04.381803 2758 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
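The deprecation warnings above say `--container-runtime-endpoint` and `--volume-plugin-dir` should move into the kubelet config file. A sketch of an equivalent KubeletConfiguration fragment built as plain data; the field names are assumed from the v1beta1 schema and the endpoint/path values are placeholders, so verify both against your kubelet version:

```python
import json

# Hypothetical config-file equivalents of the deprecated flags in the log
kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    "volumePluginDir": "/var/lib/kubelet/volumeplugins",
}
print(json.dumps(kubelet_config, indent=2))
```

The kubelet is pointed at the file with `--config`, as the warning's linked documentation describes.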
May 13 12:53:04.384401 kubelet[2758]: I0513 12:53:04.384383 2758 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 12:53:04.391606 kubelet[2758]: I0513 12:53:04.391526 2758 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 12:53:04.395078 kubelet[2758]: I0513 12:53:04.395011 2758 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 12:53:04.395341 kubelet[2758]: I0513 12:53:04.395313 2758 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 12:53:04.395621 kubelet[2758]: I0513 12:53:04.395400 2758 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999-9-100-4cb33ef211.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManage
rPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 12:53:04.395773 kubelet[2758]: I0513 12:53:04.395752 2758 topology_manager.go:138] "Creating topology manager with none policy" May 13 12:53:04.396164 kubelet[2758]: I0513 12:53:04.395827 2758 container_manager_linux.go:304] "Creating device plugin manager" May 13 12:53:04.396164 kubelet[2758]: I0513 12:53:04.395869 2758 state_mem.go:36] "Initialized new in-memory state store" May 13 12:53:04.396164 kubelet[2758]: I0513 12:53:04.395999 2758 kubelet.go:446] "Attempting to sync node with API server" May 13 12:53:04.396164 kubelet[2758]: I0513 12:53:04.396012 2758 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 12:53:04.396164 kubelet[2758]: I0513 12:53:04.396035 2758 kubelet.go:352] "Adding apiserver pod source" May 13 12:53:04.396164 kubelet[2758]: I0513 12:53:04.396049 2758 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 12:53:04.403137 kubelet[2758]: I0513 12:53:04.403056 2758 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 12:53:04.411554 kubelet[2758]: I0513 12:53:04.409403 2758 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 12:53:04.413029 kubelet[2758]: I0513 12:53:04.412992 2758 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 12:53:04.414949 kubelet[2758]: I0513 12:53:04.413083 2758 server.go:1287] "Started kubelet" May 13 12:53:04.421509 kubelet[2758]: I0513 12:53:04.421452 2758 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 12:53:04.430789 kubelet[2758]: 
I0513 12:53:04.430754 2758 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 12:53:04.434336 kubelet[2758]: I0513 12:53:04.434307 2758 server.go:490] "Adding debug handlers to kubelet server" May 13 12:53:04.437092 kubelet[2758]: I0513 12:53:04.431619 2758 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 12:53:04.438009 kubelet[2758]: I0513 12:53:04.430852 2758 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 12:53:04.438330 kubelet[2758]: I0513 12:53:04.438312 2758 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 12:53:04.438434 kubelet[2758]: E0513 12:53:04.432670 2758 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-9999-9-100-4cb33ef211.novalocal\" not found" May 13 12:53:04.438529 kubelet[2758]: I0513 12:53:04.432327 2758 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 12:53:04.438916 kubelet[2758]: I0513 12:53:04.432339 2758 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 12:53:04.439068 kubelet[2758]: I0513 12:53:04.439057 2758 reconciler.go:26] "Reconciler: start to sync state" May 13 12:53:04.444652 kubelet[2758]: I0513 12:53:04.444607 2758 factory.go:221] Registration of the systemd container factory successfully May 13 12:53:04.444759 kubelet[2758]: I0513 12:53:04.444723 2758 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 12:53:04.445092 kubelet[2758]: I0513 12:53:04.445052 2758 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 13 12:53:04.446127 kubelet[2758]: I0513 12:53:04.446113 2758 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 12:53:04.446205 kubelet[2758]: I0513 12:53:04.446196 2758 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 12:53:04.446278 kubelet[2758]: I0513 12:53:04.446269 2758 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 12:53:04.446332 kubelet[2758]: I0513 12:53:04.446324 2758 kubelet.go:2388] "Starting kubelet main sync loop" May 13 12:53:04.446426 kubelet[2758]: E0513 12:53:04.446409 2758 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 12:53:04.449427 kubelet[2758]: I0513 12:53:04.449399 2758 factory.go:221] Registration of the containerd container factory successfully May 13 12:53:04.453784 kubelet[2758]: E0513 12:53:04.453758 2758 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 12:53:04.531873 kubelet[2758]: I0513 12:53:04.531847 2758 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 12:53:04.532551 kubelet[2758]: I0513 12:53:04.532021 2758 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 12:53:04.532551 kubelet[2758]: I0513 12:53:04.532052 2758 state_mem.go:36] "Initialized new in-memory state store" May 13 12:53:04.532551 kubelet[2758]: I0513 12:53:04.532220 2758 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 12:53:04.532551 kubelet[2758]: I0513 12:53:04.532232 2758 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 12:53:04.532551 kubelet[2758]: I0513 12:53:04.532251 2758 policy_none.go:49] "None policy: Start" May 13 12:53:04.532551 kubelet[2758]: I0513 12:53:04.532260 2758 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 12:53:04.532551 kubelet[2758]: I0513 12:53:04.532270 2758 state_mem.go:35] "Initializing new in-memory state store" May 13 12:53:04.532551 kubelet[2758]: I0513 12:53:04.532377 2758 state_mem.go:75] "Updated machine memory state" May 13 12:53:04.540686 kubelet[2758]: I0513 12:53:04.540415 2758 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:53:04.543035 kubelet[2758]: I0513 12:53:04.543008 2758 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 12:53:04.543118 kubelet[2758]: I0513 12:53:04.543036 2758 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:53:04.545540 kubelet[2758]: I0513 12:53:04.545518 2758 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:53:04.548176 kubelet[2758]: E0513 12:53:04.547878 2758 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 12:53:04.550841 kubelet[2758]: I0513 12:53:04.549422 2758 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.551649 kubelet[2758]: I0513 12:53:04.551623 2758 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.555892 kubelet[2758]: I0513 12:53:04.555846 2758 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.570757 kubelet[2758]: W0513 12:53:04.570641 2758 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 12:53:04.573098 kubelet[2758]: W0513 12:53:04.572018 2758 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 12:53:04.573318 kubelet[2758]: E0513 12:53:04.573110 2758 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.580869 kubelet[2758]: W0513 12:53:04.580708 2758 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 12:53:04.641318 kubelet[2758]: I0513 12:53:04.641269 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2fae5123a27bb66af6abaab11d50480-ca-certs\") pod \"kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"b2fae5123a27bb66af6abaab11d50480\") " pod="kube-system/kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal" May 13 
12:53:04.641680 kubelet[2758]: I0513 12:53:04.641576 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d0135e9287a0c402dd2c60153a5466a-kubeconfig\") pod \"kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"8d0135e9287a0c402dd2c60153a5466a\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.641851 kubelet[2758]: I0513 12:53:04.641741 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d0135e9287a0c402dd2c60153a5466a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"8d0135e9287a0c402dd2c60153a5466a\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.641851 kubelet[2758]: I0513 12:53:04.641769 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d0135e9287a0c402dd2c60153a5466a-k8s-certs\") pod \"kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"8d0135e9287a0c402dd2c60153a5466a\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.642040 kubelet[2758]: I0513 12:53:04.641928 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a40f1c2480df765fd01e8b741606712-kubeconfig\") pod \"kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"4a40f1c2480df765fd01e8b741606712\") " pod="kube-system/kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.642040 kubelet[2758]: I0513 12:53:04.641952 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b2fae5123a27bb66af6abaab11d50480-k8s-certs\") pod \"kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"b2fae5123a27bb66af6abaab11d50480\") " pod="kube-system/kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.642299 kubelet[2758]: I0513 12:53:04.642280 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2fae5123a27bb66af6abaab11d50480-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"b2fae5123a27bb66af6abaab11d50480\") " pod="kube-system/kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.642500 kubelet[2758]: I0513 12:53:04.642442 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d0135e9287a0c402dd2c60153a5466a-ca-certs\") pod \"kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"8d0135e9287a0c402dd2c60153a5466a\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.642619 kubelet[2758]: I0513 12:53:04.642472 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8d0135e9287a0c402dd2c60153a5466a-flexvolume-dir\") pod \"kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal\" (UID: \"8d0135e9287a0c402dd2c60153a5466a\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.665088 kubelet[2758]: I0513 12:53:04.664838 2758 kubelet_node_status.go:76] "Attempting to register node" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.676457 kubelet[2758]: I0513 12:53:04.676143 2758 kubelet_node_status.go:125] "Node was previously registered" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:04.676457 kubelet[2758]: I0513 
12:53:04.676216 2758 kubelet_node_status.go:79] "Successfully registered node" node="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:05.399329 kubelet[2758]: I0513 12:53:05.399209 2758 apiserver.go:52] "Watching apiserver" May 13 12:53:05.439994 kubelet[2758]: I0513 12:53:05.439922 2758 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 12:53:05.500079 kubelet[2758]: I0513 12:53:05.499772 2758 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:05.509931 kubelet[2758]: W0513 12:53:05.509892 2758 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 13 12:53:05.510043 kubelet[2758]: E0513 12:53:05.509960 2758 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:05.549979 kubelet[2758]: I0513 12:53:05.549445 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-9999-9-100-4cb33ef211.novalocal" podStartSLOduration=1.549423246 podStartE2EDuration="1.549423246s" podCreationTimestamp="2025-05-13 12:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:53:05.533814798 +0000 UTC m=+1.356962998" watchObservedRunningTime="2025-05-13 12:53:05.549423246 +0000 UTC m=+1.372571456" May 13 12:53:05.563499 kubelet[2758]: I0513 12:53:05.563420 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-9999-9-100-4cb33ef211.novalocal" podStartSLOduration=2.563384333 podStartE2EDuration="2.563384333s" podCreationTimestamp="2025-05-13 12:53:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:53:05.550589196 +0000 UTC m=+1.373737406" watchObservedRunningTime="2025-05-13 12:53:05.563384333 +0000 UTC m=+1.386532533" May 13 12:53:05.563960 kubelet[2758]: I0513 12:53:05.563733 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-9999-9-100-4cb33ef211.novalocal" podStartSLOduration=1.563724737 podStartE2EDuration="1.563724737s" podCreationTimestamp="2025-05-13 12:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:53:05.561746273 +0000 UTC m=+1.384894483" watchObservedRunningTime="2025-05-13 12:53:05.563724737 +0000 UTC m=+1.386872947" May 13 12:53:08.264023 kubelet[2758]: I0513 12:53:08.263685 2758 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 12:53:08.265030 containerd[1551]: time="2025-05-13T12:53:08.263954399Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 12:53:08.265785 kubelet[2758]: I0513 12:53:08.265121 2758 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 12:53:08.847970 systemd[1]: Created slice kubepods-besteffort-podeffcb564_4af4_4b41_9271_21bf9c33b07e.slice - libcontainer container kubepods-besteffort-podeffcb564_4af4_4b41_9271_21bf9c33b07e.slice. 
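The runtime-config update above assigns the node podCIDR 192.168.0.0/24, which yields 254 usable pod addresses on this node. The arithmetic can be checked with the standard library:

```python
import ipaddress

# Pod CIDR from the "Updating runtime config through cri with podcidr" entry
cidr = ipaddress.ip_network("192.168.0.0/24")
print(cidr.num_addresses)       # 256 addresses in total
hosts = list(cidr.hosts())      # excludes network and broadcast addresses
print(len(hosts))               # 254
print(hosts[0], hosts[-1])      # 192.168.0.1 192.168.0.254
```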
May 13 12:53:08.875150 kubelet[2758]: I0513 12:53:08.875098 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/effcb564-4af4-4b41-9271-21bf9c33b07e-xtables-lock\") pod \"kube-proxy-qw59j\" (UID: \"effcb564-4af4-4b41-9271-21bf9c33b07e\") " pod="kube-system/kube-proxy-qw59j" May 13 12:53:08.875150 kubelet[2758]: I0513 12:53:08.875141 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/effcb564-4af4-4b41-9271-21bf9c33b07e-kube-proxy\") pod \"kube-proxy-qw59j\" (UID: \"effcb564-4af4-4b41-9271-21bf9c33b07e\") " pod="kube-system/kube-proxy-qw59j" May 13 12:53:08.875150 kubelet[2758]: I0513 12:53:08.875160 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/effcb564-4af4-4b41-9271-21bf9c33b07e-lib-modules\") pod \"kube-proxy-qw59j\" (UID: \"effcb564-4af4-4b41-9271-21bf9c33b07e\") " pod="kube-system/kube-proxy-qw59j" May 13 12:53:08.875331 kubelet[2758]: I0513 12:53:08.875182 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkxqq\" (UniqueName: \"kubernetes.io/projected/effcb564-4af4-4b41-9271-21bf9c33b07e-kube-api-access-kkxqq\") pod \"kube-proxy-qw59j\" (UID: \"effcb564-4af4-4b41-9271-21bf9c33b07e\") " pod="kube-system/kube-proxy-qw59j" May 13 12:53:08.985902 kubelet[2758]: E0513 12:53:08.985822 2758 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 13 12:53:08.985902 kubelet[2758]: E0513 12:53:08.985855 2758 projected.go:194] Error preparing data for projected volume kube-api-access-kkxqq for pod kube-system/kube-proxy-qw59j: configmap "kube-root-ca.crt" not found May 13 12:53:08.986220 kubelet[2758]: E0513 12:53:08.986112 2758 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/effcb564-4af4-4b41-9271-21bf9c33b07e-kube-api-access-kkxqq podName:effcb564-4af4-4b41-9271-21bf9c33b07e nodeName:}" failed. No retries permitted until 2025-05-13 12:53:09.486087115 +0000 UTC m=+5.309235315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kkxqq" (UniqueName: "kubernetes.io/projected/effcb564-4af4-4b41-9271-21bf9c33b07e-kube-api-access-kkxqq") pod "kube-proxy-qw59j" (UID: "effcb564-4af4-4b41-9271-21bf9c33b07e") : configmap "kube-root-ca.crt" not found May 13 12:53:09.178921 kubelet[2758]: I0513 12:53:09.178250 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-848h9\" (UniqueName: \"kubernetes.io/projected/fa829887-5ebe-49a4-91d3-b77a51883f5b-kube-api-access-848h9\") pod \"tigera-operator-789496d6f5-sgf5d\" (UID: \"fa829887-5ebe-49a4-91d3-b77a51883f5b\") " pod="tigera-operator/tigera-operator-789496d6f5-sgf5d" May 13 12:53:09.178921 kubelet[2758]: I0513 12:53:09.178388 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa829887-5ebe-49a4-91d3-b77a51883f5b-var-lib-calico\") pod \"tigera-operator-789496d6f5-sgf5d\" (UID: \"fa829887-5ebe-49a4-91d3-b77a51883f5b\") " pod="tigera-operator/tigera-operator-789496d6f5-sgf5d" May 13 12:53:09.182693 systemd[1]: Created slice kubepods-besteffort-podfa829887_5ebe_49a4_91d3_b77a51883f5b.slice - libcontainer container kubepods-besteffort-podfa829887_5ebe_49a4_91d3_b77a51883f5b.slice. 
May 13 12:53:09.489438 containerd[1551]: time="2025-05-13T12:53:09.489183191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-sgf5d,Uid:fa829887-5ebe-49a4-91d3-b77a51883f5b,Namespace:tigera-operator,Attempt:0,}" May 13 12:53:09.548917 containerd[1551]: time="2025-05-13T12:53:09.548649906Z" level=info msg="connecting to shim 3cacb711590d28d2a0af61e073cd364a387e203c3447602e6e72a95dd44e522d" address="unix:///run/containerd/s/30149be606c3c4622e2c8419c5dd5dc4a73ab0d3479b6704dd8b5d0c5a0e1704" namespace=k8s.io protocol=ttrpc version=3 May 13 12:53:09.588628 systemd[1]: Started cri-containerd-3cacb711590d28d2a0af61e073cd364a387e203c3447602e6e72a95dd44e522d.scope - libcontainer container 3cacb711590d28d2a0af61e073cd364a387e203c3447602e6e72a95dd44e522d. May 13 12:53:09.641274 containerd[1551]: time="2025-05-13T12:53:09.641237569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-sgf5d,Uid:fa829887-5ebe-49a4-91d3-b77a51883f5b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3cacb711590d28d2a0af61e073cd364a387e203c3447602e6e72a95dd44e522d\"" May 13 12:53:09.643661 containerd[1551]: time="2025-05-13T12:53:09.643633336Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 13 12:53:09.763649 containerd[1551]: time="2025-05-13T12:53:09.763024396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qw59j,Uid:effcb564-4af4-4b41-9271-21bf9c33b07e,Namespace:kube-system,Attempt:0,}" May 13 12:53:09.816636 containerd[1551]: time="2025-05-13T12:53:09.816475286Z" level=info msg="connecting to shim 8d2877cbdefa928260925f20aa67f9df3580a81be0421a5312c6a3b3a89fd091" address="unix:///run/containerd/s/707b44422fdf5e74b5016c6a81d7c2446ede1635b4d11b7b1766bcf6aa3fd202" namespace=k8s.io protocol=ttrpc version=3 May 13 12:53:09.871744 systemd[1]: Started cri-containerd-8d2877cbdefa928260925f20aa67f9df3580a81be0421a5312c6a3b3a89fd091.scope - libcontainer container 
8d2877cbdefa928260925f20aa67f9df3580a81be0421a5312c6a3b3a89fd091. May 13 12:53:09.902203 containerd[1551]: time="2025-05-13T12:53:09.902113232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qw59j,Uid:effcb564-4af4-4b41-9271-21bf9c33b07e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d2877cbdefa928260925f20aa67f9df3580a81be0421a5312c6a3b3a89fd091\"" May 13 12:53:09.906567 containerd[1551]: time="2025-05-13T12:53:09.906533331Z" level=info msg="CreateContainer within sandbox \"8d2877cbdefa928260925f20aa67f9df3580a81be0421a5312c6a3b3a89fd091\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 12:53:09.922272 containerd[1551]: time="2025-05-13T12:53:09.922223309Z" level=info msg="Container 4f374db88f1df3a56805773caca0bdddd359de9abf0bb76f45d909b44ea662ce: CDI devices from CRI Config.CDIDevices: []" May 13 12:53:09.933586 containerd[1551]: time="2025-05-13T12:53:09.933543209Z" level=info msg="CreateContainer within sandbox \"8d2877cbdefa928260925f20aa67f9df3580a81be0421a5312c6a3b3a89fd091\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4f374db88f1df3a56805773caca0bdddd359de9abf0bb76f45d909b44ea662ce\"" May 13 12:53:09.935450 containerd[1551]: time="2025-05-13T12:53:09.934287999Z" level=info msg="StartContainer for \"4f374db88f1df3a56805773caca0bdddd359de9abf0bb76f45d909b44ea662ce\"" May 13 12:53:09.937233 containerd[1551]: time="2025-05-13T12:53:09.937186066Z" level=info msg="connecting to shim 4f374db88f1df3a56805773caca0bdddd359de9abf0bb76f45d909b44ea662ce" address="unix:///run/containerd/s/707b44422fdf5e74b5016c6a81d7c2446ede1635b4d11b7b1766bcf6aa3fd202" protocol=ttrpc version=3 May 13 12:53:09.958624 systemd[1]: Started cri-containerd-4f374db88f1df3a56805773caca0bdddd359de9abf0bb76f45d909b44ea662ce.scope - libcontainer container 4f374db88f1df3a56805773caca0bdddd359de9abf0bb76f45d909b44ea662ce. 
May 13 12:53:10.004112 containerd[1551]: time="2025-05-13T12:53:10.004024652Z" level=info msg="StartContainer for \"4f374db88f1df3a56805773caca0bdddd359de9abf0bb76f45d909b44ea662ce\" returns successfully" May 13 12:53:10.548217 kubelet[2758]: I0513 12:53:10.548077 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qw59j" podStartSLOduration=2.547924995 podStartE2EDuration="2.547924995s" podCreationTimestamp="2025-05-13 12:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:53:10.546450915 +0000 UTC m=+6.369599196" watchObservedRunningTime="2025-05-13 12:53:10.547924995 +0000 UTC m=+6.371073275" May 13 12:53:11.151369 sudo[1793]: pam_unix(sudo:session): session closed for user root May 13 12:53:11.299550 sshd[1792]: Connection closed by 172.24.4.1 port 47284 May 13 12:53:11.297706 sshd-session[1790]: pam_unix(sshd:session): session closed for user core May 13 12:53:11.309230 systemd-logind[1524]: Session 9 logged out. Waiting for processes to exit. May 13 12:53:11.310384 systemd[1]: sshd@6-172.24.4.211:22-172.24.4.1:47284.service: Deactivated successfully. May 13 12:53:11.318017 systemd[1]: session-9.scope: Deactivated successfully. May 13 12:53:11.318965 systemd[1]: session-9.scope: Consumed 6.753s CPU time, 239.2M memory peak. May 13 12:53:11.329101 systemd-logind[1524]: Removed session 9. May 13 12:53:11.376367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4096154779.mount: Deactivated successfully. 
May 13 12:53:12.726876 containerd[1551]: time="2025-05-13T12:53:12.726777272Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:12.728055 containerd[1551]: time="2025-05-13T12:53:12.728005640Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 13 12:53:12.729826 containerd[1551]: time="2025-05-13T12:53:12.729766045Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:12.732252 containerd[1551]: time="2025-05-13T12:53:12.732172493Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:12.732917 containerd[1551]: time="2025-05-13T12:53:12.732875649Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 3.089013432s" May 13 12:53:12.732970 containerd[1551]: time="2025-05-13T12:53:12.732916426Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 13 12:53:12.736635 containerd[1551]: time="2025-05-13T12:53:12.736457120Z" level=info msg="CreateContainer within sandbox \"3cacb711590d28d2a0af61e073cd364a387e203c3447602e6e72a95dd44e522d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 13 12:53:12.748974 containerd[1551]: time="2025-05-13T12:53:12.748931944Z" level=info msg="Container 
04ae3f302b8a4016ac542254ccb408878d6a4b23a984e3d7adcf793419de7636: CDI devices from CRI Config.CDIDevices: []" May 13 12:53:12.752878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2173889750.mount: Deactivated successfully. May 13 12:53:12.759223 containerd[1551]: time="2025-05-13T12:53:12.759186322Z" level=info msg="CreateContainer within sandbox \"3cacb711590d28d2a0af61e073cd364a387e203c3447602e6e72a95dd44e522d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"04ae3f302b8a4016ac542254ccb408878d6a4b23a984e3d7adcf793419de7636\"" May 13 12:53:12.760086 containerd[1551]: time="2025-05-13T12:53:12.759981265Z" level=info msg="StartContainer for \"04ae3f302b8a4016ac542254ccb408878d6a4b23a984e3d7adcf793419de7636\"" May 13 12:53:12.762345 containerd[1551]: time="2025-05-13T12:53:12.762310136Z" level=info msg="connecting to shim 04ae3f302b8a4016ac542254ccb408878d6a4b23a984e3d7adcf793419de7636" address="unix:///run/containerd/s/30149be606c3c4622e2c8419c5dd5dc4a73ab0d3479b6704dd8b5d0c5a0e1704" protocol=ttrpc version=3 May 13 12:53:12.791661 systemd[1]: Started cri-containerd-04ae3f302b8a4016ac542254ccb408878d6a4b23a984e3d7adcf793419de7636.scope - libcontainer container 04ae3f302b8a4016ac542254ccb408878d6a4b23a984e3d7adcf793419de7636. 
May 13 12:53:12.828209 containerd[1551]: time="2025-05-13T12:53:12.828171204Z" level=info msg="StartContainer for \"04ae3f302b8a4016ac542254ccb408878d6a4b23a984e3d7adcf793419de7636\" returns successfully" May 13 12:53:14.437050 kubelet[2758]: I0513 12:53:14.436688 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-sgf5d" podStartSLOduration=2.345723318 podStartE2EDuration="5.436641759s" podCreationTimestamp="2025-05-13 12:53:09 +0000 UTC" firstStartedPulling="2025-05-13 12:53:09.64292298 +0000 UTC m=+5.466071180" lastFinishedPulling="2025-05-13 12:53:12.733841421 +0000 UTC m=+8.556989621" observedRunningTime="2025-05-13 12:53:13.565720224 +0000 UTC m=+9.388868474" watchObservedRunningTime="2025-05-13 12:53:14.436641759 +0000 UTC m=+10.259790049" May 13 12:53:16.019868 systemd[1]: Created slice kubepods-besteffort-pod9d91234d_6fa7_4e5c_bb83_ea22dc59ccf0.slice - libcontainer container kubepods-besteffort-pod9d91234d_6fa7_4e5c_bb83_ea22dc59ccf0.slice. 
May 13 12:53:16.022066 kubelet[2758]: I0513 12:53:16.020626 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9d91234d-6fa7-4e5c-bb83-ea22dc59ccf0-typha-certs\") pod \"calico-typha-65647948b5-wjdvk\" (UID: \"9d91234d-6fa7-4e5c-bb83-ea22dc59ccf0\") " pod="calico-system/calico-typha-65647948b5-wjdvk" May 13 12:53:16.022066 kubelet[2758]: I0513 12:53:16.021813 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d91234d-6fa7-4e5c-bb83-ea22dc59ccf0-tigera-ca-bundle\") pod \"calico-typha-65647948b5-wjdvk\" (UID: \"9d91234d-6fa7-4e5c-bb83-ea22dc59ccf0\") " pod="calico-system/calico-typha-65647948b5-wjdvk" May 13 12:53:16.022066 kubelet[2758]: I0513 12:53:16.021866 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9jkx\" (UniqueName: \"kubernetes.io/projected/9d91234d-6fa7-4e5c-bb83-ea22dc59ccf0-kube-api-access-d9jkx\") pod \"calico-typha-65647948b5-wjdvk\" (UID: \"9d91234d-6fa7-4e5c-bb83-ea22dc59ccf0\") " pod="calico-system/calico-typha-65647948b5-wjdvk" May 13 12:53:16.110464 systemd[1]: Created slice kubepods-besteffort-podfeedf370_0fc9_486b_b036_46857f8e48d9.slice - libcontainer container kubepods-besteffort-podfeedf370_0fc9_486b_b036_46857f8e48d9.slice. 
May 13 12:53:16.227050 kubelet[2758]: I0513 12:53:16.226989 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/feedf370-0fc9-486b-b036-46857f8e48d9-policysync\") pod \"calico-node-kbk5r\" (UID: \"feedf370-0fc9-486b-b036-46857f8e48d9\") " pod="calico-system/calico-node-kbk5r" May 13 12:53:16.228215 kubelet[2758]: I0513 12:53:16.228108 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/feedf370-0fc9-486b-b036-46857f8e48d9-cni-log-dir\") pod \"calico-node-kbk5r\" (UID: \"feedf370-0fc9-486b-b036-46857f8e48d9\") " pod="calico-system/calico-node-kbk5r" May 13 12:53:16.228350 kubelet[2758]: I0513 12:53:16.228283 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/feedf370-0fc9-486b-b036-46857f8e48d9-flexvol-driver-host\") pod \"calico-node-kbk5r\" (UID: \"feedf370-0fc9-486b-b036-46857f8e48d9\") " pod="calico-system/calico-node-kbk5r" May 13 12:53:16.228350 kubelet[2758]: I0513 12:53:16.228316 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/feedf370-0fc9-486b-b036-46857f8e48d9-xtables-lock\") pod \"calico-node-kbk5r\" (UID: \"feedf370-0fc9-486b-b036-46857f8e48d9\") " pod="calico-system/calico-node-kbk5r" May 13 12:53:16.228511 kubelet[2758]: I0513 12:53:16.228460 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/feedf370-0fc9-486b-b036-46857f8e48d9-cni-net-dir\") pod \"calico-node-kbk5r\" (UID: \"feedf370-0fc9-486b-b036-46857f8e48d9\") " pod="calico-system/calico-node-kbk5r" May 13 12:53:16.228851 kubelet[2758]: I0513 12:53:16.228609 2758 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlbwm\" (UniqueName: \"kubernetes.io/projected/feedf370-0fc9-486b-b036-46857f8e48d9-kube-api-access-mlbwm\") pod \"calico-node-kbk5r\" (UID: \"feedf370-0fc9-486b-b036-46857f8e48d9\") " pod="calico-system/calico-node-kbk5r" May 13 12:53:16.228851 kubelet[2758]: I0513 12:53:16.228638 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/feedf370-0fc9-486b-b036-46857f8e48d9-var-run-calico\") pod \"calico-node-kbk5r\" (UID: \"feedf370-0fc9-486b-b036-46857f8e48d9\") " pod="calico-system/calico-node-kbk5r" May 13 12:53:16.228851 kubelet[2758]: I0513 12:53:16.228667 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/feedf370-0fc9-486b-b036-46857f8e48d9-cni-bin-dir\") pod \"calico-node-kbk5r\" (UID: \"feedf370-0fc9-486b-b036-46857f8e48d9\") " pod="calico-system/calico-node-kbk5r" May 13 12:53:16.228851 kubelet[2758]: I0513 12:53:16.228688 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/feedf370-0fc9-486b-b036-46857f8e48d9-tigera-ca-bundle\") pod \"calico-node-kbk5r\" (UID: \"feedf370-0fc9-486b-b036-46857f8e48d9\") " pod="calico-system/calico-node-kbk5r" May 13 12:53:16.228851 kubelet[2758]: I0513 12:53:16.228707 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/feedf370-0fc9-486b-b036-46857f8e48d9-var-lib-calico\") pod \"calico-node-kbk5r\" (UID: \"feedf370-0fc9-486b-b036-46857f8e48d9\") " pod="calico-system/calico-node-kbk5r" May 13 12:53:16.229004 kubelet[2758]: I0513 12:53:16.228727 2758 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/feedf370-0fc9-486b-b036-46857f8e48d9-lib-modules\") pod \"calico-node-kbk5r\" (UID: \"feedf370-0fc9-486b-b036-46857f8e48d9\") " pod="calico-system/calico-node-kbk5r" May 13 12:53:16.229004 kubelet[2758]: I0513 12:53:16.228746 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/feedf370-0fc9-486b-b036-46857f8e48d9-node-certs\") pod \"calico-node-kbk5r\" (UID: \"feedf370-0fc9-486b-b036-46857f8e48d9\") " pod="calico-system/calico-node-kbk5r" May 13 12:53:16.236861 kubelet[2758]: E0513 12:53:16.236816 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gsp6z" podUID="49521272-f372-4acc-b36d-7e519fb5603a" May 13 12:53:16.326559 containerd[1551]: time="2025-05-13T12:53:16.326150426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65647948b5-wjdvk,Uid:9d91234d-6fa7-4e5c-bb83-ea22dc59ccf0,Namespace:calico-system,Attempt:0,}" May 13 12:53:16.329562 kubelet[2758]: I0513 12:53:16.329368 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49521272-f372-4acc-b36d-7e519fb5603a-kubelet-dir\") pod \"csi-node-driver-gsp6z\" (UID: \"49521272-f372-4acc-b36d-7e519fb5603a\") " pod="calico-system/csi-node-driver-gsp6z" May 13 12:53:16.330115 kubelet[2758]: I0513 12:53:16.330009 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bsjt\" (UniqueName: \"kubernetes.io/projected/49521272-f372-4acc-b36d-7e519fb5603a-kube-api-access-4bsjt\") pod \"csi-node-driver-gsp6z\" 
(UID: \"49521272-f372-4acc-b36d-7e519fb5603a\") " pod="calico-system/csi-node-driver-gsp6z" May 13 12:53:16.330551 kubelet[2758]: I0513 12:53:16.330525 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/49521272-f372-4acc-b36d-7e519fb5603a-varrun\") pod \"csi-node-driver-gsp6z\" (UID: \"49521272-f372-4acc-b36d-7e519fb5603a\") " pod="calico-system/csi-node-driver-gsp6z" May 13 12:53:16.330607 kubelet[2758]: I0513 12:53:16.330557 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/49521272-f372-4acc-b36d-7e519fb5603a-registration-dir\") pod \"csi-node-driver-gsp6z\" (UID: \"49521272-f372-4acc-b36d-7e519fb5603a\") " pod="calico-system/csi-node-driver-gsp6z" May 13 12:53:16.331582 kubelet[2758]: I0513 12:53:16.331548 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/49521272-f372-4acc-b36d-7e519fb5603a-socket-dir\") pod \"csi-node-driver-gsp6z\" (UID: \"49521272-f372-4acc-b36d-7e519fb5603a\") " pod="calico-system/csi-node-driver-gsp6z" May 13 12:53:16.337261 kubelet[2758]: E0513 12:53:16.336467 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.337261 kubelet[2758]: W0513 12:53:16.336529 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.337261 kubelet[2758]: E0513 12:53:16.336549 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:16.344261 kubelet[2758]: E0513 12:53:16.343650 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.344261 kubelet[2758]: W0513 12:53:16.343671 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.344261 kubelet[2758]: E0513 12:53:16.343712 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.359359 kubelet[2758]: E0513 12:53:16.359324 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.359359 kubelet[2758]: W0513 12:53:16.359347 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.359525 kubelet[2758]: E0513 12:53:16.359370 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:16.391785 containerd[1551]: time="2025-05-13T12:53:16.391665807Z" level=info msg="connecting to shim 8e69dd8fb7c01b1bc8e6b7ecb011ae4339577073a4c3920748862d00183f24b8" address="unix:///run/containerd/s/1505afb2c394d905e593c2babf1275ee13f619b145bf6baef2573c849498b4a2" namespace=k8s.io protocol=ttrpc version=3 May 13 12:53:16.415823 containerd[1551]: time="2025-05-13T12:53:16.415462117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kbk5r,Uid:feedf370-0fc9-486b-b036-46857f8e48d9,Namespace:calico-system,Attempt:0,}" May 13 12:53:16.431695 systemd[1]: Started cri-containerd-8e69dd8fb7c01b1bc8e6b7ecb011ae4339577073a4c3920748862d00183f24b8.scope - libcontainer container 8e69dd8fb7c01b1bc8e6b7ecb011ae4339577073a4c3920748862d00183f24b8. May 13 12:53:16.434251 kubelet[2758]: E0513 12:53:16.434221 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.434251 kubelet[2758]: W0513 12:53:16.434245 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.435566 kubelet[2758]: E0513 12:53:16.435538 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:16.436822 kubelet[2758]: E0513 12:53:16.436268 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.436822 kubelet[2758]: W0513 12:53:16.436286 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.436822 kubelet[2758]: E0513 12:53:16.436329 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.436822 kubelet[2758]: E0513 12:53:16.436602 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.436822 kubelet[2758]: W0513 12:53:16.436612 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.436822 kubelet[2758]: E0513 12:53:16.436625 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:16.436822 kubelet[2758]: E0513 12:53:16.436803 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.436822 kubelet[2758]: W0513 12:53:16.436813 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.437831 kubelet[2758]: E0513 12:53:16.436823 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.437831 kubelet[2758]: E0513 12:53:16.437260 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.437831 kubelet[2758]: W0513 12:53:16.437271 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.438647 kubelet[2758]: E0513 12:53:16.438624 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.438647 kubelet[2758]: W0513 12:53:16.438639 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.438727 kubelet[2758]: E0513 12:53:16.438651 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:16.439034 kubelet[2758]: E0513 12:53:16.438803 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.439034 kubelet[2758]: E0513 12:53:16.438830 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.439034 kubelet[2758]: W0513 12:53:16.438840 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.439034 kubelet[2758]: E0513 12:53:16.438848 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.439468 kubelet[2758]: E0513 12:53:16.439424 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.439866 kubelet[2758]: W0513 12:53:16.439631 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.439866 kubelet[2758]: E0513 12:53:16.439655 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:16.441220 kubelet[2758]: E0513 12:53:16.441192 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.441220 kubelet[2758]: W0513 12:53:16.441205 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.441536 kubelet[2758]: E0513 12:53:16.441522 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.442004 kubelet[2758]: E0513 12:53:16.441917 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.442004 kubelet[2758]: W0513 12:53:16.441928 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.442004 kubelet[2758]: E0513 12:53:16.441972 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:16.442322 kubelet[2758]: E0513 12:53:16.442280 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.442322 kubelet[2758]: W0513 12:53:16.442292 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.442387 kubelet[2758]: E0513 12:53:16.442321 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.443148 kubelet[2758]: E0513 12:53:16.443083 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.443148 kubelet[2758]: W0513 12:53:16.443095 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.443148 kubelet[2758]: E0513 12:53:16.443131 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:16.443796 kubelet[2758]: E0513 12:53:16.443605 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.443796 kubelet[2758]: W0513 12:53:16.443618 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.443796 kubelet[2758]: E0513 12:53:16.443654 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.444738 kubelet[2758]: E0513 12:53:16.444688 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.444738 kubelet[2758]: W0513 12:53:16.444700 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.444814 kubelet[2758]: E0513 12:53:16.444733 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:16.445071 kubelet[2758]: E0513 12:53:16.445045 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.445222 kubelet[2758]: W0513 12:53:16.445171 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.445222 kubelet[2758]: E0513 12:53:16.445213 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.445612 kubelet[2758]: E0513 12:53:16.445531 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.445612 kubelet[2758]: W0513 12:53:16.445543 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.445612 kubelet[2758]: E0513 12:53:16.445589 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:16.446379 kubelet[2758]: E0513 12:53:16.446356 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.447519 kubelet[2758]: W0513 12:53:16.446652 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.448020 kubelet[2758]: E0513 12:53:16.447679 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.448020 kubelet[2758]: W0513 12:53:16.447692 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.448020 kubelet[2758]: E0513 12:53:16.447797 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.448020 kubelet[2758]: W0513 12:53:16.447804 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.448020 kubelet[2758]: E0513 12:53:16.447903 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.448020 kubelet[2758]: W0513 12:53:16.447911 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.448020 kubelet[2758]: E0513 12:53:16.447922 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.448277 kubelet[2758]: E0513 12:53:16.448265 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.448345 kubelet[2758]: W0513 12:53:16.448333 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.448409 kubelet[2758]: E0513 12:53:16.448396 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.448517 kubelet[2758]: E0513 12:53:16.448504 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.448687 kubelet[2758]: E0513 12:53:16.448676 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.448758 kubelet[2758]: W0513 12:53:16.448746 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.449450 kubelet[2758]: E0513 12:53:16.449349 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.449450 kubelet[2758]: E0513 12:53:16.449370 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:16.449450 kubelet[2758]: E0513 12:53:16.448919 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.449956 kubelet[2758]: E0513 12:53:16.449685 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.449956 kubelet[2758]: W0513 12:53:16.449697 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.449956 kubelet[2758]: E0513 12:53:16.449711 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.449956 kubelet[2758]: E0513 12:53:16.449879 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.449956 kubelet[2758]: W0513 12:53:16.449888 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.449956 kubelet[2758]: E0513 12:53:16.449896 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:16.450834 kubelet[2758]: E0513 12:53:16.450683 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.450834 kubelet[2758]: W0513 12:53:16.450695 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.450834 kubelet[2758]: E0513 12:53:16.450731 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.464266 containerd[1551]: time="2025-05-13T12:53:16.464088147Z" level=info msg="connecting to shim 4e1f5c0d4cb759ec083355c6874fa5692e1c5da4362fe9b889ea0e1f91d33887" address="unix:///run/containerd/s/01172bc7d95654aed456b2da137fc8f999d5e7da0c63501d7b604d2b6af067d0" namespace=k8s.io protocol=ttrpc version=3 May 13 12:53:16.465122 kubelet[2758]: E0513 12:53:16.465094 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:16.465122 kubelet[2758]: W0513 12:53:16.465115 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:16.465249 kubelet[2758]: E0513 12:53:16.465134 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:16.502963 systemd[1]: Started cri-containerd-4e1f5c0d4cb759ec083355c6874fa5692e1c5da4362fe9b889ea0e1f91d33887.scope - libcontainer container 4e1f5c0d4cb759ec083355c6874fa5692e1c5da4362fe9b889ea0e1f91d33887. 
May 13 12:53:16.536401 containerd[1551]: time="2025-05-13T12:53:16.536354188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65647948b5-wjdvk,Uid:9d91234d-6fa7-4e5c-bb83-ea22dc59ccf0,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e69dd8fb7c01b1bc8e6b7ecb011ae4339577073a4c3920748862d00183f24b8\"" May 13 12:53:16.538616 containerd[1551]: time="2025-05-13T12:53:16.538538778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 12:53:16.568376 containerd[1551]: time="2025-05-13T12:53:16.568294210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kbk5r,Uid:feedf370-0fc9-486b-b036-46857f8e48d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e1f5c0d4cb759ec083355c6874fa5692e1c5da4362fe9b889ea0e1f91d33887\"" May 13 12:53:17.640759 kubelet[2758]: E0513 12:53:17.640706 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:17.640759 kubelet[2758]: W0513 12:53:17.640730 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:17.641469 kubelet[2758]: E0513 12:53:17.640778 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:17.641469 kubelet[2758]: E0513 12:53:17.640983 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:17.641469 kubelet[2758]: W0513 12:53:17.640992 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:17.641469 kubelet[2758]: E0513 12:53:17.641002 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:17.641469 kubelet[2758]: E0513 12:53:17.641177 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:17.641469 kubelet[2758]: W0513 12:53:17.641214 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:17.641469 kubelet[2758]: E0513 12:53:17.641224 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:17.641469 kubelet[2758]: E0513 12:53:17.641431 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:17.641469 kubelet[2758]: W0513 12:53:17.641439 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:17.641469 kubelet[2758]: E0513 12:53:17.641463 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:17.642120 kubelet[2758]: E0513 12:53:17.641699 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:17.642120 kubelet[2758]: W0513 12:53:17.641709 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:17.642120 kubelet[2758]: E0513 12:53:17.641718 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:18.457626 kubelet[2758]: E0513 12:53:18.456879 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gsp6z" podUID="49521272-f372-4acc-b36d-7e519fb5603a" May 13 12:53:18.660651 kubelet[2758]: E0513 12:53:18.660591 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:18.665032 kubelet[2758]: W0513 12:53:18.662091 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:18.665032 kubelet[2758]: E0513 12:53:18.662134 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:18.665032 kubelet[2758]: E0513 12:53:18.662463 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:18.665032 kubelet[2758]: W0513 12:53:18.662529 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:18.665032 kubelet[2758]: E0513 12:53:18.662551 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:18.665032 kubelet[2758]: E0513 12:53:18.663852 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:18.665032 kubelet[2758]: W0513 12:53:18.663880 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:18.665032 kubelet[2758]: E0513 12:53:18.663914 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:18.667209 kubelet[2758]: E0513 12:53:18.665670 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:18.667209 kubelet[2758]: W0513 12:53:18.665789 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:18.667209 kubelet[2758]: E0513 12:53:18.665877 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:18.667889 kubelet[2758]: E0513 12:53:18.667585 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:18.667889 kubelet[2758]: W0513 12:53:18.667653 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:18.667889 kubelet[2758]: E0513 12:53:18.667682 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:19.921359 containerd[1551]: time="2025-05-13T12:53:19.921236902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:19.922739 containerd[1551]: time="2025-05-13T12:53:19.922537671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 13 12:53:19.924126 containerd[1551]: time="2025-05-13T12:53:19.924094156Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:19.927794 containerd[1551]: time="2025-05-13T12:53:19.927464056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:19.928659 containerd[1551]: time="2025-05-13T12:53:19.928628119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.389887031s" May 13 12:53:19.928845 containerd[1551]: time="2025-05-13T12:53:19.928742959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 13 12:53:19.930920 containerd[1551]: time="2025-05-13T12:53:19.930703743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 12:53:19.946875 containerd[1551]: time="2025-05-13T12:53:19.946510195Z" level=info msg="CreateContainer within sandbox \"8e69dd8fb7c01b1bc8e6b7ecb011ae4339577073a4c3920748862d00183f24b8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 13 12:53:19.964762 containerd[1551]: time="2025-05-13T12:53:19.963727588Z" level=info msg="Container 919c00044bca08cc5de60fc3d836c1d92f0b819de27040becf1bf0c656eaddb2: CDI devices from CRI Config.CDIDevices: []" May 13 12:53:19.978141 containerd[1551]: time="2025-05-13T12:53:19.978103960Z" level=info msg="CreateContainer within sandbox \"8e69dd8fb7c01b1bc8e6b7ecb011ae4339577073a4c3920748862d00183f24b8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"919c00044bca08cc5de60fc3d836c1d92f0b819de27040becf1bf0c656eaddb2\"" May 13 12:53:19.978763 containerd[1551]: time="2025-05-13T12:53:19.978744023Z" level=info msg="StartContainer for \"919c00044bca08cc5de60fc3d836c1d92f0b819de27040becf1bf0c656eaddb2\"" May 13 12:53:19.981111 containerd[1551]: time="2025-05-13T12:53:19.981019397Z" level=info msg="connecting to shim 919c00044bca08cc5de60fc3d836c1d92f0b819de27040becf1bf0c656eaddb2" address="unix:///run/containerd/s/1505afb2c394d905e593c2babf1275ee13f619b145bf6baef2573c849498b4a2" protocol=ttrpc version=3 May 13 12:53:20.010628 systemd[1]: Started cri-containerd-919c00044bca08cc5de60fc3d836c1d92f0b819de27040becf1bf0c656eaddb2.scope - libcontainer 
container 919c00044bca08cc5de60fc3d836c1d92f0b819de27040becf1bf0c656eaddb2. May 13 12:53:20.068881 containerd[1551]: time="2025-05-13T12:53:20.068849935Z" level=info msg="StartContainer for \"919c00044bca08cc5de60fc3d836c1d92f0b819de27040becf1bf0c656eaddb2\" returns successfully" May 13 12:53:20.447862 kubelet[2758]: E0513 12:53:20.447166 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gsp6z" podUID="49521272-f372-4acc-b36d-7e519fb5603a" May 13 12:53:20.680110 kubelet[2758]: E0513 12:53:20.680025 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.680110 kubelet[2758]: W0513 12:53:20.680048 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.680110 kubelet[2758]: E0513 12:53:20.680067 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.680576 kubelet[2758]: E0513 12:53:20.680510 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.680576 kubelet[2758]: W0513 12:53:20.680528 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.680576 kubelet[2758]: E0513 12:53:20.680539 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.680894 kubelet[2758]: E0513 12:53:20.680834 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.680894 kubelet[2758]: W0513 12:53:20.680846 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.680894 kubelet[2758]: E0513 12:53:20.680856 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.681176 kubelet[2758]: E0513 12:53:20.681141 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.681176 kubelet[2758]: W0513 12:53:20.681151 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.681176 kubelet[2758]: E0513 12:53:20.681161 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.681469 kubelet[2758]: E0513 12:53:20.681421 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.681469 kubelet[2758]: W0513 12:53:20.681432 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.681469 kubelet[2758]: E0513 12:53:20.681440 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.681766 kubelet[2758]: E0513 12:53:20.681711 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.681766 kubelet[2758]: W0513 12:53:20.681722 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.681766 kubelet[2758]: E0513 12:53:20.681734 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.682030 kubelet[2758]: E0513 12:53:20.681977 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.682030 kubelet[2758]: W0513 12:53:20.681988 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.682030 kubelet[2758]: E0513 12:53:20.681996 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.682291 kubelet[2758]: E0513 12:53:20.682259 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.682291 kubelet[2758]: W0513 12:53:20.682270 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.682291 kubelet[2758]: E0513 12:53:20.682279 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.682663 kubelet[2758]: E0513 12:53:20.682604 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.682663 kubelet[2758]: W0513 12:53:20.682614 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.682663 kubelet[2758]: E0513 12:53:20.682623 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.682910 kubelet[2758]: E0513 12:53:20.682900 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.683009 kubelet[2758]: W0513 12:53:20.682962 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.683009 kubelet[2758]: E0513 12:53:20.682976 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.683256 kubelet[2758]: E0513 12:53:20.683203 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.683256 kubelet[2758]: W0513 12:53:20.683214 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.683256 kubelet[2758]: E0513 12:53:20.683222 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.683495 kubelet[2758]: E0513 12:53:20.683465 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.683623 kubelet[2758]: W0513 12:53:20.683546 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.683623 kubelet[2758]: E0513 12:53:20.683560 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.683935 kubelet[2758]: E0513 12:53:20.683861 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.683935 kubelet[2758]: W0513 12:53:20.683871 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.684093 kubelet[2758]: E0513 12:53:20.683880 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.684252 kubelet[2758]: E0513 12:53:20.684197 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.684252 kubelet[2758]: W0513 12:53:20.684208 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.684252 kubelet[2758]: E0513 12:53:20.684217 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.684516 kubelet[2758]: E0513 12:53:20.684472 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.684684 kubelet[2758]: W0513 12:53:20.684500 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.684684 kubelet[2758]: E0513 12:53:20.684586 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.684934 kubelet[2758]: E0513 12:53:20.684899 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.684934 kubelet[2758]: W0513 12:53:20.684910 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.684934 kubelet[2758]: E0513 12:53:20.684920 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.685262 kubelet[2758]: E0513 12:53:20.685252 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.685345 kubelet[2758]: W0513 12:53:20.685319 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.685448 kubelet[2758]: E0513 12:53:20.685437 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.685697 kubelet[2758]: E0513 12:53:20.685684 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.685774 kubelet[2758]: W0513 12:53:20.685749 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.685923 kubelet[2758]: E0513 12:53:20.685868 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.686051 kubelet[2758]: E0513 12:53:20.686041 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.686188 kubelet[2758]: W0513 12:53:20.686101 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.686188 kubelet[2758]: E0513 12:53:20.686129 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.686434 kubelet[2758]: E0513 12:53:20.686412 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.686434 kubelet[2758]: W0513 12:53:20.686422 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.686608 kubelet[2758]: E0513 12:53:20.686531 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.686802 kubelet[2758]: E0513 12:53:20.686779 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.686802 kubelet[2758]: W0513 12:53:20.686790 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.686970 kubelet[2758]: E0513 12:53:20.686952 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.687124 kubelet[2758]: E0513 12:53:20.687114 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.687199 kubelet[2758]: W0513 12:53:20.687174 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.687274 kubelet[2758]: E0513 12:53:20.687263 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.687505 kubelet[2758]: E0513 12:53:20.687454 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.687505 kubelet[2758]: W0513 12:53:20.687465 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.687698 kubelet[2758]: E0513 12:53:20.687685 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.687819 kubelet[2758]: E0513 12:53:20.687809 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.687899 kubelet[2758]: W0513 12:53:20.687882 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.687965 kubelet[2758]: E0513 12:53:20.687953 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.688182 kubelet[2758]: E0513 12:53:20.688172 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.688303 kubelet[2758]: W0513 12:53:20.688239 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.688303 kubelet[2758]: E0513 12:53:20.688268 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.688593 kubelet[2758]: E0513 12:53:20.688567 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.688593 kubelet[2758]: W0513 12:53:20.688579 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.688750 kubelet[2758]: E0513 12:53:20.688694 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.688984 kubelet[2758]: E0513 12:53:20.688951 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.688984 kubelet[2758]: W0513 12:53:20.688963 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.689410 kubelet[2758]: E0513 12:53:20.689085 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.689635 kubelet[2758]: E0513 12:53:20.689598 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.689635 kubelet[2758]: W0513 12:53:20.689626 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.689745 kubelet[2758]: E0513 12:53:20.689714 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.689809 kubelet[2758]: E0513 12:53:20.689795 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.689809 kubelet[2758]: W0513 12:53:20.689807 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.689918 kubelet[2758]: E0513 12:53:20.689901 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.690071 kubelet[2758]: E0513 12:53:20.690056 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.690071 kubelet[2758]: W0513 12:53:20.690070 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.690139 kubelet[2758]: E0513 12:53:20.690084 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.690523 kubelet[2758]: E0513 12:53:20.690391 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.690523 kubelet[2758]: W0513 12:53:20.690403 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.690523 kubelet[2758]: E0513 12:53:20.690419 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:20.690638 kubelet[2758]: E0513 12:53:20.690625 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.690638 kubelet[2758]: W0513 12:53:20.690633 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.690688 kubelet[2758]: E0513 12:53:20.690643 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:20.690904 kubelet[2758]: E0513 12:53:20.690893 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:20.690964 kubelet[2758]: W0513 12:53:20.690954 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:20.691025 kubelet[2758]: E0513 12:53:20.691014 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.596080 kubelet[2758]: E0513 12:53:21.595978 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.596080 kubelet[2758]: W0513 12:53:21.596021 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.596080 kubelet[2758]: E0513 12:53:21.596054 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.600892 kubelet[2758]: E0513 12:53:21.597383 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.600892 kubelet[2758]: W0513 12:53:21.597406 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.600892 kubelet[2758]: E0513 12:53:21.597430 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.600892 kubelet[2758]: E0513 12:53:21.599666 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.600892 kubelet[2758]: W0513 12:53:21.600557 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.600892 kubelet[2758]: E0513 12:53:21.600598 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.602887 kubelet[2758]: E0513 12:53:21.602149 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.602887 kubelet[2758]: W0513 12:53:21.602174 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.602887 kubelet[2758]: E0513 12:53:21.602254 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.604865 kubelet[2758]: E0513 12:53:21.604790 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.604987 kubelet[2758]: W0513 12:53:21.604879 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.604987 kubelet[2758]: E0513 12:53:21.604908 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.605874 kubelet[2758]: E0513 12:53:21.605807 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.606004 kubelet[2758]: W0513 12:53:21.605838 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.606004 kubelet[2758]: E0513 12:53:21.605915 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.607005 kubelet[2758]: E0513 12:53:21.606961 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.607545 kubelet[2758]: W0513 12:53:21.606995 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.607545 kubelet[2758]: E0513 12:53:21.607186 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.608713 kubelet[2758]: E0513 12:53:21.608637 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.608713 kubelet[2758]: W0513 12:53:21.608671 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.608713 kubelet[2758]: E0513 12:53:21.608697 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.611775 kubelet[2758]: E0513 12:53:21.611709 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.612022 kubelet[2758]: W0513 12:53:21.611971 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.612257 kubelet[2758]: E0513 12:53:21.612216 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.614122 kubelet[2758]: E0513 12:53:21.613561 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.614122 kubelet[2758]: W0513 12:53:21.613592 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.614122 kubelet[2758]: E0513 12:53:21.613620 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.615005 kubelet[2758]: E0513 12:53:21.614833 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.615698 kubelet[2758]: W0513 12:53:21.615231 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.615698 kubelet[2758]: E0513 12:53:21.615266 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.616694 kubelet[2758]: E0513 12:53:21.616640 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.617532 kubelet[2758]: W0513 12:53:21.617126 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.617532 kubelet[2758]: E0513 12:53:21.617169 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.619156 kubelet[2758]: E0513 12:53:21.619125 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.619333 kubelet[2758]: W0513 12:53:21.619305 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.619469 kubelet[2758]: E0513 12:53:21.619444 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.621394 kubelet[2758]: E0513 12:53:21.620377 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.621394 kubelet[2758]: W0513 12:53:21.620408 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.621394 kubelet[2758]: E0513 12:53:21.620431 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.622363 kubelet[2758]: E0513 12:53:21.622094 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.622363 kubelet[2758]: W0513 12:53:21.622123 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.622363 kubelet[2758]: E0513 12:53:21.622146 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
May 13 12:53:21.648403 kubelet[2758]: I0513 12:53:21.646290 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65647948b5-wjdvk" podStartSLOduration=3.254199125 podStartE2EDuration="6.646266104s" podCreationTimestamp="2025-05-13 12:53:15 +0000 UTC" firstStartedPulling="2025-05-13 12:53:16.538047658 +0000 UTC m=+12.361195858" lastFinishedPulling="2025-05-13 12:53:19.930114637 +0000 UTC m=+15.753262837" observedRunningTime="2025-05-13 12:53:20.62279415 +0000 UTC m=+16.445942380" watchObservedRunningTime="2025-05-13 12:53:21.646266104 +0000 UTC m=+17.469414334"
May 13 12:53:21.694571 kubelet[2758]: E0513 12:53:21.694541 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:53:21.694571 kubelet[2758]: W0513 12:53:21.694568 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:53:21.694726 kubelet[2758]: E0513 12:53:21.694589 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 13 12:53:21.694900 kubelet[2758]: E0513 12:53:21.694873 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 13 12:53:21.694900 kubelet[2758]: W0513 12:53:21.694897 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 13 12:53:21.694966 kubelet[2758]: E0513 12:53:21.694926 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.695169 kubelet[2758]: E0513 12:53:21.695154 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.695209 kubelet[2758]: W0513 12:53:21.695171 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.695209 kubelet[2758]: E0513 12:53:21.695188 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.695403 kubelet[2758]: E0513 12:53:21.695376 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.695403 kubelet[2758]: W0513 12:53:21.695393 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.695466 kubelet[2758]: E0513 12:53:21.695441 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.695701 kubelet[2758]: E0513 12:53:21.695659 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.695701 kubelet[2758]: W0513 12:53:21.695700 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.695761 kubelet[2758]: E0513 12:53:21.695718 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.695923 kubelet[2758]: E0513 12:53:21.695905 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.695923 kubelet[2758]: W0513 12:53:21.695918 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.696031 kubelet[2758]: E0513 12:53:21.695970 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.696309 kubelet[2758]: E0513 12:53:21.696178 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.696309 kubelet[2758]: W0513 12:53:21.696187 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.696309 kubelet[2758]: E0513 12:53:21.696230 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.697494 kubelet[2758]: E0513 12:53:21.696588 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.697494 kubelet[2758]: W0513 12:53:21.696602 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.697494 kubelet[2758]: E0513 12:53:21.696612 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.697494 kubelet[2758]: E0513 12:53:21.696916 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.697494 kubelet[2758]: W0513 12:53:21.696925 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.697494 kubelet[2758]: E0513 12:53:21.696935 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.697494 kubelet[2758]: E0513 12:53:21.697207 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.697494 kubelet[2758]: W0513 12:53:21.697216 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.697494 kubelet[2758]: E0513 12:53:21.697270 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.697743 kubelet[2758]: E0513 12:53:21.697574 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.697743 kubelet[2758]: W0513 12:53:21.697584 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.697743 kubelet[2758]: E0513 12:53:21.697603 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.697989 kubelet[2758]: E0513 12:53:21.697970 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.697989 kubelet[2758]: W0513 12:53:21.697984 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.698049 kubelet[2758]: E0513 12:53:21.697996 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.698189 kubelet[2758]: E0513 12:53:21.698171 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.698228 kubelet[2758]: W0513 12:53:21.698204 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.698228 kubelet[2758]: E0513 12:53:21.698222 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.698403 kubelet[2758]: E0513 12:53:21.698387 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.698403 kubelet[2758]: W0513 12:53:21.698399 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.698459 kubelet[2758]: E0513 12:53:21.698419 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.698626 kubelet[2758]: E0513 12:53:21.698609 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.698661 kubelet[2758]: W0513 12:53:21.698638 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.698661 kubelet[2758]: E0513 12:53:21.698658 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.698844 kubelet[2758]: E0513 12:53:21.698827 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.698844 kubelet[2758]: W0513 12:53:21.698839 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.698909 kubelet[2758]: E0513 12:53:21.698857 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.699116 kubelet[2758]: E0513 12:53:21.699087 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.699116 kubelet[2758]: W0513 12:53:21.699099 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.699170 kubelet[2758]: E0513 12:53:21.699116 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 12:53:21.699298 kubelet[2758]: E0513 12:53:21.699278 2758 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 12:53:21.699298 kubelet[2758]: W0513 12:53:21.699291 2758 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 12:53:21.699354 kubelet[2758]: E0513 12:53:21.699299 2758 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 12:53:21.906732 containerd[1551]: time="2025-05-13T12:53:21.906623618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:21.908835 containerd[1551]: time="2025-05-13T12:53:21.908798675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 13 12:53:21.910525 containerd[1551]: time="2025-05-13T12:53:21.910472736Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:21.913935 containerd[1551]: time="2025-05-13T12:53:21.913893752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:21.914455 containerd[1551]: time="2025-05-13T12:53:21.914289537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.983547964s" May 13 12:53:21.914455 containerd[1551]: time="2025-05-13T12:53:21.914321023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 13 12:53:21.917326 containerd[1551]: time="2025-05-13T12:53:21.917033049Z" level=info msg="CreateContainer within sandbox \"4e1f5c0d4cb759ec083355c6874fa5692e1c5da4362fe9b889ea0e1f91d33887\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 12:53:21.935870 containerd[1551]: time="2025-05-13T12:53:21.934702433Z" level=info msg="Container 26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39: CDI devices from CRI Config.CDIDevices: []" May 13 12:53:21.940663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986959933.mount: Deactivated successfully. May 13 12:53:21.949884 containerd[1551]: time="2025-05-13T12:53:21.949852183Z" level=info msg="CreateContainer within sandbox \"4e1f5c0d4cb759ec083355c6874fa5692e1c5da4362fe9b889ea0e1f91d33887\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39\"" May 13 12:53:21.952572 containerd[1551]: time="2025-05-13T12:53:21.951525021Z" level=info msg="StartContainer for \"26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39\"" May 13 12:53:21.953372 containerd[1551]: time="2025-05-13T12:53:21.953350880Z" level=info msg="connecting to shim 26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39" address="unix:///run/containerd/s/01172bc7d95654aed456b2da137fc8f999d5e7da0c63501d7b604d2b6af067d0" protocol=ttrpc version=3 May 13 12:53:21.983630 systemd[1]: Started cri-containerd-26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39.scope - libcontainer container 26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39. May 13 12:53:22.026346 containerd[1551]: time="2025-05-13T12:53:22.026253409Z" level=info msg="StartContainer for \"26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39\" returns successfully" May 13 12:53:22.037807 systemd[1]: cri-containerd-26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39.scope: Deactivated successfully. 
May 13 12:53:22.042747 containerd[1551]: time="2025-05-13T12:53:22.042687787Z" level=info msg="received exit event container_id:\"26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39\" id:\"26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39\" pid:3401 exited_at:{seconds:1747140802 nanos:42309199}" May 13 12:53:22.042944 containerd[1551]: time="2025-05-13T12:53:22.042898675Z" level=info msg="TaskExit event in podsandbox handler container_id:\"26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39\" id:\"26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39\" pid:3401 exited_at:{seconds:1747140802 nanos:42309199}" May 13 12:53:22.070178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39-rootfs.mount: Deactivated successfully. May 13 12:53:22.459512 kubelet[2758]: E0513 12:53:22.458868 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gsp6z" podUID="49521272-f372-4acc-b36d-7e519fb5603a" May 13 12:53:23.614586 containerd[1551]: time="2025-05-13T12:53:23.614506343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 12:53:24.449993 kubelet[2758]: E0513 12:53:24.448837 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gsp6z" podUID="49521272-f372-4acc-b36d-7e519fb5603a" May 13 12:53:26.449389 kubelet[2758]: E0513 12:53:26.449329 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gsp6z" podUID="49521272-f372-4acc-b36d-7e519fb5603a" May 13 12:53:28.448302 kubelet[2758]: E0513 12:53:28.448098 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gsp6z" podUID="49521272-f372-4acc-b36d-7e519fb5603a" May 13 12:53:29.764962 containerd[1551]: time="2025-05-13T12:53:29.764903863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:29.766750 containerd[1551]: time="2025-05-13T12:53:29.766646885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 13 12:53:29.768302 containerd[1551]: time="2025-05-13T12:53:29.768258940Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:29.771831 containerd[1551]: time="2025-05-13T12:53:29.771692028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:29.773070 containerd[1551]: time="2025-05-13T12:53:29.772865273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.158285737s" May 13 12:53:29.773070 containerd[1551]: time="2025-05-13T12:53:29.772925576Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 13 12:53:29.778866 containerd[1551]: time="2025-05-13T12:53:29.778791080Z" level=info msg="CreateContainer within sandbox \"4e1f5c0d4cb759ec083355c6874fa5692e1c5da4362fe9b889ea0e1f91d33887\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 12:53:29.798904 containerd[1551]: time="2025-05-13T12:53:29.798849567Z" level=info msg="Container 8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e: CDI devices from CRI Config.CDIDevices: []" May 13 12:53:29.814333 containerd[1551]: time="2025-05-13T12:53:29.814233061Z" level=info msg="CreateContainer within sandbox \"4e1f5c0d4cb759ec083355c6874fa5692e1c5da4362fe9b889ea0e1f91d33887\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e\"" May 13 12:53:29.816260 containerd[1551]: time="2025-05-13T12:53:29.816218511Z" level=info msg="StartContainer for \"8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e\"" May 13 12:53:29.818188 containerd[1551]: time="2025-05-13T12:53:29.818121581Z" level=info msg="connecting to shim 8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e" address="unix:///run/containerd/s/01172bc7d95654aed456b2da137fc8f999d5e7da0c63501d7b604d2b6af067d0" protocol=ttrpc version=3 May 13 12:53:29.845668 systemd[1]: Started cri-containerd-8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e.scope - libcontainer container 8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e. 
May 13 12:53:29.903616 containerd[1551]: time="2025-05-13T12:53:29.903577625Z" level=info msg="StartContainer for \"8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e\" returns successfully" May 13 12:53:30.455430 kubelet[2758]: E0513 12:53:30.455275 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gsp6z" podUID="49521272-f372-4acc-b36d-7e519fb5603a" May 13 12:53:30.998182 containerd[1551]: time="2025-05-13T12:53:30.997971608Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 12:53:31.001345 systemd[1]: cri-containerd-8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e.scope: Deactivated successfully. May 13 12:53:31.002098 systemd[1]: cri-containerd-8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e.scope: Consumed 660ms CPU time, 183.7M memory peak, 154M written to disk. 
May 13 12:53:31.007042 containerd[1551]: time="2025-05-13T12:53:31.006984333Z" level=info msg="received exit event container_id:\"8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e\" id:\"8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e\" pid:3462 exited_at:{seconds:1747140811 nanos:6567651}" May 13 12:53:31.007415 containerd[1551]: time="2025-05-13T12:53:31.007219423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e\" id:\"8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e\" pid:3462 exited_at:{seconds:1747140811 nanos:6567651}" May 13 12:53:31.043734 kubelet[2758]: I0513 12:53:31.042598 2758 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 12:53:31.048187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e-rootfs.mount: Deactivated successfully. May 13 12:53:31.907418 systemd[1]: Created slice kubepods-burstable-pod51adfee0_0257_46dd_818d_dc755eed294c.slice - libcontainer container kubepods-burstable-pod51adfee0_0257_46dd_818d_dc755eed294c.slice. May 13 12:53:31.954972 systemd[1]: Created slice kubepods-besteffort-podf7acec06_24e7_499d_af08_e8d75e498103.slice - libcontainer container kubepods-besteffort-podf7acec06_24e7_499d_af08_e8d75e498103.slice. May 13 12:53:31.963987 systemd[1]: Created slice kubepods-besteffort-podab33d1a6_7348_4ce5_8818_fbb6db783449.slice - libcontainer container kubepods-besteffort-podab33d1a6_7348_4ce5_8818_fbb6db783449.slice. May 13 12:53:31.972781 systemd[1]: Created slice kubepods-burstable-pod1d6d4ece_ad85_44ab_b7d8_e8707b058cb2.slice - libcontainer container kubepods-burstable-pod1d6d4ece_ad85_44ab_b7d8_e8707b058cb2.slice. 
May 13 12:53:31.975530 kubelet[2758]: I0513 12:53:31.975492 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51adfee0-0257-46dd-818d-dc755eed294c-config-volume\") pod \"coredns-668d6bf9bc-tvf49\" (UID: \"51adfee0-0257-46dd-818d-dc755eed294c\") " pod="kube-system/coredns-668d6bf9bc-tvf49" May 13 12:53:31.976823 kubelet[2758]: I0513 12:53:31.975530 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jc7g\" (UniqueName: \"kubernetes.io/projected/51adfee0-0257-46dd-818d-dc755eed294c-kube-api-access-8jc7g\") pod \"coredns-668d6bf9bc-tvf49\" (UID: \"51adfee0-0257-46dd-818d-dc755eed294c\") " pod="kube-system/coredns-668d6bf9bc-tvf49" May 13 12:53:31.982471 systemd[1]: Created slice kubepods-besteffort-pod6e68020b_8e9f_4fd5_b1d5_7adff9fb7f59.slice - libcontainer container kubepods-besteffort-pod6e68020b_8e9f_4fd5_b1d5_7adff9fb7f59.slice. 
May 13 12:53:32.077546 kubelet[2758]: I0513 12:53:32.076692 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h5sp\" (UniqueName: \"kubernetes.io/projected/6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59-kube-api-access-2h5sp\") pod \"calico-kube-controllers-68947865bd-bbqw2\" (UID: \"6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59\") " pod="calico-system/calico-kube-controllers-68947865bd-bbqw2" May 13 12:53:32.077546 kubelet[2758]: I0513 12:53:32.076768 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d6d4ece-ad85-44ab-b7d8-e8707b058cb2-config-volume\") pod \"coredns-668d6bf9bc-8glxb\" (UID: \"1d6d4ece-ad85-44ab-b7d8-e8707b058cb2\") " pod="kube-system/coredns-668d6bf9bc-8glxb" May 13 12:53:32.077546 kubelet[2758]: I0513 12:53:32.076839 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gn9c\" (UniqueName: \"kubernetes.io/projected/f7acec06-24e7-499d-af08-e8d75e498103-kube-api-access-5gn9c\") pod \"calico-apiserver-749c7f87b9-59gx2\" (UID: \"f7acec06-24e7-499d-af08-e8d75e498103\") " pod="calico-apiserver/calico-apiserver-749c7f87b9-59gx2" May 13 12:53:32.077546 kubelet[2758]: I0513 12:53:32.076880 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-576lh\" (UniqueName: \"kubernetes.io/projected/1d6d4ece-ad85-44ab-b7d8-e8707b058cb2-kube-api-access-576lh\") pod \"coredns-668d6bf9bc-8glxb\" (UID: \"1d6d4ece-ad85-44ab-b7d8-e8707b058cb2\") " pod="kube-system/coredns-668d6bf9bc-8glxb" May 13 12:53:32.077546 kubelet[2758]: I0513 12:53:32.076938 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59-tigera-ca-bundle\") pod 
\"calico-kube-controllers-68947865bd-bbqw2\" (UID: \"6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59\") " pod="calico-system/calico-kube-controllers-68947865bd-bbqw2" May 13 12:53:32.078562 kubelet[2758]: I0513 12:53:32.076972 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab33d1a6-7348-4ce5-8818-fbb6db783449-calico-apiserver-certs\") pod \"calico-apiserver-749c7f87b9-z9bcd\" (UID: \"ab33d1a6-7348-4ce5-8818-fbb6db783449\") " pod="calico-apiserver/calico-apiserver-749c7f87b9-z9bcd" May 13 12:53:32.078562 kubelet[2758]: I0513 12:53:32.077005 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f7acec06-24e7-499d-af08-e8d75e498103-calico-apiserver-certs\") pod \"calico-apiserver-749c7f87b9-59gx2\" (UID: \"f7acec06-24e7-499d-af08-e8d75e498103\") " pod="calico-apiserver/calico-apiserver-749c7f87b9-59gx2" May 13 12:53:32.078562 kubelet[2758]: I0513 12:53:32.077101 2758 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgf7d\" (UniqueName: \"kubernetes.io/projected/ab33d1a6-7348-4ce5-8818-fbb6db783449-kube-api-access-vgf7d\") pod \"calico-apiserver-749c7f87b9-z9bcd\" (UID: \"ab33d1a6-7348-4ce5-8818-fbb6db783449\") " pod="calico-apiserver/calico-apiserver-749c7f87b9-z9bcd" May 13 12:53:32.251829 containerd[1551]: time="2025-05-13T12:53:32.251555946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tvf49,Uid:51adfee0-0257-46dd-818d-dc755eed294c,Namespace:kube-system,Attempt:0,}" May 13 12:53:32.263322 containerd[1551]: time="2025-05-13T12:53:32.263266572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749c7f87b9-59gx2,Uid:f7acec06-24e7-499d-af08-e8d75e498103,Namespace:calico-apiserver,Attempt:0,}" May 13 12:53:32.270321 containerd[1551]: 
time="2025-05-13T12:53:32.269648444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749c7f87b9-z9bcd,Uid:ab33d1a6-7348-4ce5-8818-fbb6db783449,Namespace:calico-apiserver,Attempt:0,}" May 13 12:53:32.284068 containerd[1551]: time="2025-05-13T12:53:32.284015801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8glxb,Uid:1d6d4ece-ad85-44ab-b7d8-e8707b058cb2,Namespace:kube-system,Attempt:0,}" May 13 12:53:32.288141 containerd[1551]: time="2025-05-13T12:53:32.288107359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68947865bd-bbqw2,Uid:6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59,Namespace:calico-system,Attempt:0,}" May 13 12:53:32.381539 containerd[1551]: time="2025-05-13T12:53:32.381500870Z" level=error msg="Failed to destroy network for sandbox \"42db382a5cfb50988480ab1b691462c945ca739f94793648f224d7be28aee725\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.383582 containerd[1551]: time="2025-05-13T12:53:32.383550742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tvf49,Uid:51adfee0-0257-46dd-818d-dc755eed294c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42db382a5cfb50988480ab1b691462c945ca739f94793648f224d7be28aee725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.384520 kubelet[2758]: E0513 12:53:32.383872 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42db382a5cfb50988480ab1b691462c945ca739f94793648f224d7be28aee725\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.384520 kubelet[2758]: E0513 12:53:32.383950 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42db382a5cfb50988480ab1b691462c945ca739f94793648f224d7be28aee725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tvf49" May 13 12:53:32.384520 kubelet[2758]: E0513 12:53:32.383983 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42db382a5cfb50988480ab1b691462c945ca739f94793648f224d7be28aee725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tvf49" May 13 12:53:32.384648 kubelet[2758]: E0513 12:53:32.384027 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tvf49_kube-system(51adfee0-0257-46dd-818d-dc755eed294c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tvf49_kube-system(51adfee0-0257-46dd-818d-dc755eed294c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42db382a5cfb50988480ab1b691462c945ca739f94793648f224d7be28aee725\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tvf49" podUID="51adfee0-0257-46dd-818d-dc755eed294c" May 13 12:53:32.387323 containerd[1551]: time="2025-05-13T12:53:32.387289301Z" 
level=error msg="Failed to destroy network for sandbox \"d707cae43fa33e223324feeb4f8f0ae293bb448423096fe7ad4c3921eba6fd9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.390133 containerd[1551]: time="2025-05-13T12:53:32.390093103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749c7f87b9-59gx2,Uid:f7acec06-24e7-499d-af08-e8d75e498103,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d707cae43fa33e223324feeb4f8f0ae293bb448423096fe7ad4c3921eba6fd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.390545 kubelet[2758]: E0513 12:53:32.390452 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d707cae43fa33e223324feeb4f8f0ae293bb448423096fe7ad4c3921eba6fd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.390699 kubelet[2758]: E0513 12:53:32.390668 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d707cae43fa33e223324feeb4f8f0ae293bb448423096fe7ad4c3921eba6fd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-749c7f87b9-59gx2" May 13 12:53:32.390881 kubelet[2758]: E0513 12:53:32.390777 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"d707cae43fa33e223324feeb4f8f0ae293bb448423096fe7ad4c3921eba6fd9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-749c7f87b9-59gx2" May 13 12:53:32.390881 kubelet[2758]: E0513 12:53:32.390839 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-749c7f87b9-59gx2_calico-apiserver(f7acec06-24e7-499d-af08-e8d75e498103)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-749c7f87b9-59gx2_calico-apiserver(f7acec06-24e7-499d-af08-e8d75e498103)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d707cae43fa33e223324feeb4f8f0ae293bb448423096fe7ad4c3921eba6fd9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-749c7f87b9-59gx2" podUID="f7acec06-24e7-499d-af08-e8d75e498103" May 13 12:53:32.421194 containerd[1551]: time="2025-05-13T12:53:32.421148942Z" level=error msg="Failed to destroy network for sandbox \"d14d9df173150722ab4b3a73297237c2b4067892bf7230067f80927ae85b9dc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.425104 containerd[1551]: time="2025-05-13T12:53:32.424998928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8glxb,Uid:1d6d4ece-ad85-44ab-b7d8-e8707b058cb2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d14d9df173150722ab4b3a73297237c2b4067892bf7230067f80927ae85b9dc1\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.425321 kubelet[2758]: E0513 12:53:32.425285 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d14d9df173150722ab4b3a73297237c2b4067892bf7230067f80927ae85b9dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.425372 kubelet[2758]: E0513 12:53:32.425340 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d14d9df173150722ab4b3a73297237c2b4067892bf7230067f80927ae85b9dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8glxb" May 13 12:53:32.425402 kubelet[2758]: E0513 12:53:32.425367 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d14d9df173150722ab4b3a73297237c2b4067892bf7230067f80927ae85b9dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8glxb" May 13 12:53:32.425431 kubelet[2758]: E0513 12:53:32.425407 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8glxb_kube-system(1d6d4ece-ad85-44ab-b7d8-e8707b058cb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8glxb_kube-system(1d6d4ece-ad85-44ab-b7d8-e8707b058cb2)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"d14d9df173150722ab4b3a73297237c2b4067892bf7230067f80927ae85b9dc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8glxb" podUID="1d6d4ece-ad85-44ab-b7d8-e8707b058cb2" May 13 12:53:32.430696 containerd[1551]: time="2025-05-13T12:53:32.430644849Z" level=error msg="Failed to destroy network for sandbox \"5ef100ec330103d54108e43e0ce0c52f98d15d091e2dbfd7547dc132c8be14a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.432074 containerd[1551]: time="2025-05-13T12:53:32.432030186Z" level=error msg="Failed to destroy network for sandbox \"2fd87769767b9b9f37b6438774de112083a9ec746aa112e88375d0d0528889f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.432429 containerd[1551]: time="2025-05-13T12:53:32.432290046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68947865bd-bbqw2,Uid:6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ef100ec330103d54108e43e0ce0c52f98d15d091e2dbfd7547dc132c8be14a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.432849 kubelet[2758]: E0513 12:53:32.432694 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5ef100ec330103d54108e43e0ce0c52f98d15d091e2dbfd7547dc132c8be14a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.432903 kubelet[2758]: E0513 12:53:32.432825 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ef100ec330103d54108e43e0ce0c52f98d15d091e2dbfd7547dc132c8be14a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68947865bd-bbqw2" May 13 12:53:32.432903 kubelet[2758]: E0513 12:53:32.432879 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ef100ec330103d54108e43e0ce0c52f98d15d091e2dbfd7547dc132c8be14a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68947865bd-bbqw2" May 13 12:53:32.433209 kubelet[2758]: E0513 12:53:32.432926 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68947865bd-bbqw2_calico-system(6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68947865bd-bbqw2_calico-system(6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ef100ec330103d54108e43e0ce0c52f98d15d091e2dbfd7547dc132c8be14a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68947865bd-bbqw2" podUID="6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59" May 13 12:53:32.434844 containerd[1551]: time="2025-05-13T12:53:32.434425243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749c7f87b9-z9bcd,Uid:ab33d1a6-7348-4ce5-8818-fbb6db783449,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd87769767b9b9f37b6438774de112083a9ec746aa112e88375d0d0528889f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.435177 kubelet[2758]: E0513 12:53:32.435123 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd87769767b9b9f37b6438774de112083a9ec746aa112e88375d0d0528889f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.435243 kubelet[2758]: E0513 12:53:32.435185 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd87769767b9b9f37b6438774de112083a9ec746aa112e88375d0d0528889f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-749c7f87b9-z9bcd" May 13 12:53:32.435243 kubelet[2758]: E0513 12:53:32.435203 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fd87769767b9b9f37b6438774de112083a9ec746aa112e88375d0d0528889f4\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-749c7f87b9-z9bcd" May 13 12:53:32.435357 kubelet[2758]: E0513 12:53:32.435255 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-749c7f87b9-z9bcd_calico-apiserver(ab33d1a6-7348-4ce5-8818-fbb6db783449)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-749c7f87b9-z9bcd_calico-apiserver(ab33d1a6-7348-4ce5-8818-fbb6db783449)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fd87769767b9b9f37b6438774de112083a9ec746aa112e88375d0d0528889f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-749c7f87b9-z9bcd" podUID="ab33d1a6-7348-4ce5-8818-fbb6db783449" May 13 12:53:32.454915 systemd[1]: Created slice kubepods-besteffort-pod49521272_f372_4acc_b36d_7e519fb5603a.slice - libcontainer container kubepods-besteffort-pod49521272_f372_4acc_b36d_7e519fb5603a.slice. 
May 13 12:53:32.457294 containerd[1551]: time="2025-05-13T12:53:32.457266991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gsp6z,Uid:49521272-f372-4acc-b36d-7e519fb5603a,Namespace:calico-system,Attempt:0,}" May 13 12:53:32.516260 containerd[1551]: time="2025-05-13T12:53:32.516156807Z" level=error msg="Failed to destroy network for sandbox \"a5c9c17c553fec7b29ebde724fc048a0a1b1a0a32c8e4851a1adb163354d2905\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.518859 containerd[1551]: time="2025-05-13T12:53:32.518736400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gsp6z,Uid:49521272-f372-4acc-b36d-7e519fb5603a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c9c17c553fec7b29ebde724fc048a0a1b1a0a32c8e4851a1adb163354d2905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.519133 kubelet[2758]: E0513 12:53:32.519104 2758 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c9c17c553fec7b29ebde724fc048a0a1b1a0a32c8e4851a1adb163354d2905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 12:53:32.519499 kubelet[2758]: E0513 12:53:32.519231 2758 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c9c17c553fec7b29ebde724fc048a0a1b1a0a32c8e4851a1adb163354d2905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gsp6z" May 13 12:53:32.519499 kubelet[2758]: E0513 12:53:32.519266 2758 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5c9c17c553fec7b29ebde724fc048a0a1b1a0a32c8e4851a1adb163354d2905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gsp6z" May 13 12:53:32.519499 kubelet[2758]: E0513 12:53:32.519328 2758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gsp6z_calico-system(49521272-f372-4acc-b36d-7e519fb5603a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gsp6z_calico-system(49521272-f372-4acc-b36d-7e519fb5603a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5c9c17c553fec7b29ebde724fc048a0a1b1a0a32c8e4851a1adb163354d2905\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gsp6z" podUID="49521272-f372-4acc-b36d-7e519fb5603a" May 13 12:53:32.668956 containerd[1551]: time="2025-05-13T12:53:32.668459732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 12:53:41.399063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888758585.mount: Deactivated successfully. 
May 13 12:53:41.449097 containerd[1551]: time="2025-05-13T12:53:41.449051377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:41.450841 containerd[1551]: time="2025-05-13T12:53:41.450792567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 13 12:53:41.453499 containerd[1551]: time="2025-05-13T12:53:41.452694119Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:41.455199 containerd[1551]: time="2025-05-13T12:53:41.455159924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:41.456509 containerd[1551]: time="2025-05-13T12:53:41.456114814Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.787167567s" May 13 12:53:41.456509 containerd[1551]: time="2025-05-13T12:53:41.456167351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 13 12:53:41.471637 containerd[1551]: time="2025-05-13T12:53:41.469685924Z" level=info msg="CreateContainer within sandbox \"4e1f5c0d4cb759ec083355c6874fa5692e1c5da4362fe9b889ea0e1f91d33887\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 12:53:41.485498 containerd[1551]: time="2025-05-13T12:53:41.484589808Z" level=info msg="Container 
e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c: CDI devices from CRI Config.CDIDevices: []" May 13 12:53:41.496956 containerd[1551]: time="2025-05-13T12:53:41.496904048Z" level=info msg="CreateContainer within sandbox \"4e1f5c0d4cb759ec083355c6874fa5692e1c5da4362fe9b889ea0e1f91d33887\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\"" May 13 12:53:41.498508 containerd[1551]: time="2025-05-13T12:53:41.497734086Z" level=info msg="StartContainer for \"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\"" May 13 12:53:41.499553 containerd[1551]: time="2025-05-13T12:53:41.499521769Z" level=info msg="connecting to shim e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c" address="unix:///run/containerd/s/01172bc7d95654aed456b2da137fc8f999d5e7da0c63501d7b604d2b6af067d0" protocol=ttrpc version=3 May 13 12:53:41.522615 systemd[1]: Started cri-containerd-e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c.scope - libcontainer container e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c. May 13 12:53:41.584021 containerd[1551]: time="2025-05-13T12:53:41.583958375Z" level=info msg="StartContainer for \"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" returns successfully" May 13 12:53:41.656943 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 12:53:41.657079 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 13 12:53:41.741056 kubelet[2758]: I0513 12:53:41.740949 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kbk5r" podStartSLOduration=0.854324782 podStartE2EDuration="25.740905288s" podCreationTimestamp="2025-05-13 12:53:16 +0000 UTC" firstStartedPulling="2025-05-13 12:53:16.570551875 +0000 UTC m=+12.393700075" lastFinishedPulling="2025-05-13 12:53:41.457132381 +0000 UTC m=+37.280280581" observedRunningTime="2025-05-13 12:53:41.739967623 +0000 UTC m=+37.563115833" watchObservedRunningTime="2025-05-13 12:53:41.740905288 +0000 UTC m=+37.564053509" May 13 12:53:41.825100 containerd[1551]: time="2025-05-13T12:53:41.825026887Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"8223dc66f6a389a4faaa143050245f9b110adb511b3716fce3b09426061affbc\" pid:3746 exit_status:1 exited_at:{seconds:1747140821 nanos:824649916}" May 13 12:53:42.813923 containerd[1551]: time="2025-05-13T12:53:42.813699691Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"d5651a46df0125ca2d7a4c6e0f98dd2aa181493a6de65128bce9f063339d2740\" pid:3790 exit_status:1 exited_at:{seconds:1747140822 nanos:813417000}" May 13 12:53:43.666138 systemd-networkd[1444]: vxlan.calico: Link UP May 13 12:53:43.666148 systemd-networkd[1444]: vxlan.calico: Gained carrier May 13 12:53:44.450543 containerd[1551]: time="2025-05-13T12:53:44.450218269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gsp6z,Uid:49521272-f372-4acc-b36d-7e519fb5603a,Namespace:calico-system,Attempt:0,}" May 13 12:53:44.450543 containerd[1551]: time="2025-05-13T12:53:44.450384554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8glxb,Uid:1d6d4ece-ad85-44ab-b7d8-e8707b058cb2,Namespace:kube-system,Attempt:0,}" May 13 12:53:45.297521 systemd-networkd[1444]: 
calibadc90a73d4: Link UP May 13 12:53:45.298030 systemd-networkd[1444]: calibadc90a73d4: Gained carrier May 13 12:53:45.317500 containerd[1551]: 2025-05-13 12:53:45.132 [INFO][4002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-eth0 coredns-668d6bf9bc- kube-system 1d6d4ece-ad85-44ab-b7d8-e8707b058cb2 719 0 2025-05-13 12:53:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-9999-9-100-4cb33ef211.novalocal coredns-668d6bf9bc-8glxb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibadc90a73d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8glxb" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-" May 13 12:53:45.317500 containerd[1551]: 2025-05-13 12:53:45.135 [INFO][4002] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8glxb" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-eth0" May 13 12:53:45.317500 containerd[1551]: 2025-05-13 12:53:45.193 [INFO][4018] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" HandleID="k8s-pod-network.2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-eth0" May 13 12:53:45.317719 containerd[1551]: 2025-05-13 12:53:45.213 [INFO][4018] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" HandleID="k8s-pod-network.2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003193c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-9999-9-100-4cb33ef211.novalocal", "pod":"coredns-668d6bf9bc-8glxb", "timestamp":"2025-05-13 12:53:45.19370944 +0000 UTC"}, Hostname:"ci-9999-9-100-4cb33ef211.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:53:45.317719 containerd[1551]: 2025-05-13 12:53:45.214 [INFO][4018] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:53:45.317719 containerd[1551]: 2025-05-13 12:53:45.214 [INFO][4018] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 12:53:45.317719 containerd[1551]: 2025-05-13 12:53:45.214 [INFO][4018] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-4cb33ef211.novalocal' May 13 12:53:45.317719 containerd[1551]: 2025-05-13 12:53:45.217 [INFO][4018] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.317719 containerd[1551]: 2025-05-13 12:53:45.226 [INFO][4018] ipam/ipam.go 372: Looking up existing affinities for host host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.317719 containerd[1551]: 2025-05-13 12:53:45.233 [INFO][4018] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.317719 containerd[1551]: 2025-05-13 12:53:45.236 [INFO][4018] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.317719 containerd[1551]: 2025-05-13 12:53:45.238 [INFO][4018] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.317980 containerd[1551]: 2025-05-13 12:53:45.239 [INFO][4018] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.317980 containerd[1551]: 2025-05-13 12:53:45.240 [INFO][4018] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e May 13 12:53:45.317980 containerd[1551]: 2025-05-13 12:53:45.247 [INFO][4018] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.317980 
containerd[1551]: 2025-05-13 12:53:45.261 [INFO][4018] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.193/26] block=192.168.65.192/26 handle="k8s-pod-network.2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.317980 containerd[1551]: 2025-05-13 12:53:45.261 [INFO][4018] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.193/26] handle="k8s-pod-network.2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.317980 containerd[1551]: 2025-05-13 12:53:45.262 [INFO][4018] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:53:45.317980 containerd[1551]: 2025-05-13 12:53:45.262 [INFO][4018] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.193/26] IPv6=[] ContainerID="2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" HandleID="k8s-pod-network.2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-eth0" May 13 12:53:45.318164 containerd[1551]: 2025-05-13 12:53:45.265 [INFO][4002] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8glxb" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1d6d4ece-ad85-44ab-b7d8-e8707b058cb2", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 53, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-4cb33ef211.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-8glxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibadc90a73d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:53:45.318164 containerd[1551]: 2025-05-13 12:53:45.265 [INFO][4002] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.193/32] ContainerID="2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8glxb" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-eth0" May 13 12:53:45.318164 containerd[1551]: 2025-05-13 12:53:45.265 [INFO][4002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibadc90a73d4 ContainerID="2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8glxb" 
WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-eth0" May 13 12:53:45.318164 containerd[1551]: 2025-05-13 12:53:45.298 [INFO][4002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8glxb" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-eth0" May 13 12:53:45.318164 containerd[1551]: 2025-05-13 12:53:45.299 [INFO][4002] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8glxb" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1d6d4ece-ad85-44ab-b7d8-e8707b058cb2", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 53, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-4cb33ef211.novalocal", ContainerID:"2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e", Pod:"coredns-668d6bf9bc-8glxb", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.65.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibadc90a73d4", MAC:"2e:e6:c9:8a:89:df", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:53:45.318164 containerd[1551]: 2025-05-13 12:53:45.315 [INFO][4002] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" Namespace="kube-system" Pod="coredns-668d6bf9bc-8glxb" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--8glxb-eth0" May 13 12:53:45.389053 systemd-networkd[1444]: calidfd6a6471f0: Link UP May 13 12:53:45.389701 systemd-networkd[1444]: calidfd6a6471f0: Gained carrier May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.132 [INFO][3991] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-eth0 csi-node-driver- calico-system 49521272-f372-4acc-b36d-7e519fb5603a 615 0 2025-05-13 12:53:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-9999-9-100-4cb33ef211.novalocal csi-node-driver-gsp6z eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] calidfd6a6471f0 [] []}} ContainerID="9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" Namespace="calico-system" Pod="csi-node-driver-gsp6z" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-" May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.135 [INFO][3991] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" Namespace="calico-system" Pod="csi-node-driver-gsp6z" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-eth0" May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.218 [INFO][4016] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" HandleID="k8s-pod-network.9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-eth0" May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.231 [INFO][4016] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" HandleID="k8s-pod-network.9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032fac0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999-9-100-4cb33ef211.novalocal", "pod":"csi-node-driver-gsp6z", "timestamp":"2025-05-13 12:53:45.218047427 +0000 UTC"}, Hostname:"ci-9999-9-100-4cb33ef211.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.231 
[INFO][4016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.261 [INFO][4016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.261 [INFO][4016] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-4cb33ef211.novalocal' May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.319 [INFO][4016] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.338 [INFO][4016] ipam/ipam.go 372: Looking up existing affinities for host host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.345 [INFO][4016] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.349 [INFO][4016] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.352 [INFO][4016] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.352 [INFO][4016] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.355 [INFO][4016] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.362 [INFO][4016] ipam/ipam.go 1203: 
Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.376 [INFO][4016] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.194/26] block=192.168.65.192/26 handle="k8s-pod-network.9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.376 [INFO][4016] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.194/26] handle="k8s-pod-network.9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.377 [INFO][4016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:53:45.414370 containerd[1551]: 2025-05-13 12:53:45.377 [INFO][4016] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.194/26] IPv6=[] ContainerID="9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" HandleID="k8s-pod-network.9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-eth0" May 13 12:53:45.415325 containerd[1551]: 2025-05-13 12:53:45.381 [INFO][3991] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" Namespace="calico-system" Pod="csi-node-driver-gsp6z" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"49521272-f372-4acc-b36d-7e519fb5603a", ResourceVersion:"615", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 53, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-4cb33ef211.novalocal", ContainerID:"", Pod:"csi-node-driver-gsp6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidfd6a6471f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:53:45.415325 containerd[1551]: 2025-05-13 12:53:45.382 [INFO][3991] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.194/32] ContainerID="9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" Namespace="calico-system" Pod="csi-node-driver-gsp6z" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-eth0" May 13 12:53:45.415325 containerd[1551]: 2025-05-13 12:53:45.382 [INFO][3991] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfd6a6471f0 ContainerID="9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" Namespace="calico-system" Pod="csi-node-driver-gsp6z" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-eth0" May 13 12:53:45.415325 
containerd[1551]: 2025-05-13 12:53:45.389 [INFO][3991] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" Namespace="calico-system" Pod="csi-node-driver-gsp6z" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-eth0" May 13 12:53:45.415325 containerd[1551]: 2025-05-13 12:53:45.392 [INFO][3991] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" Namespace="calico-system" Pod="csi-node-driver-gsp6z" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"49521272-f372-4acc-b36d-7e519fb5603a", ResourceVersion:"615", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 53, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-4cb33ef211.novalocal", ContainerID:"9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e", Pod:"csi-node-driver-gsp6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.65.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidfd6a6471f0", MAC:"76:47:7a:8f:86:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:53:45.415325 containerd[1551]: 2025-05-13 12:53:45.408 [INFO][3991] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" Namespace="calico-system" Pod="csi-node-driver-gsp6z" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-csi--node--driver--gsp6z-eth0" May 13 12:53:45.439503 containerd[1551]: time="2025-05-13T12:53:45.439321178Z" level=info msg="connecting to shim 2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e" address="unix:///run/containerd/s/6fe8dc33379c2f146103db8778a24de56ac510ab3740236c9ffed48d25fc8431" namespace=k8s.io protocol=ttrpc version=3 May 13 12:53:45.450093 containerd[1551]: time="2025-05-13T12:53:45.450053110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749c7f87b9-z9bcd,Uid:ab33d1a6-7348-4ce5-8818-fbb6db783449,Namespace:calico-apiserver,Attempt:0,}" May 13 12:53:45.452591 containerd[1551]: time="2025-05-13T12:53:45.452514781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749c7f87b9-59gx2,Uid:f7acec06-24e7-499d-af08-e8d75e498103,Namespace:calico-apiserver,Attempt:0,}" May 13 12:53:45.470418 systemd-networkd[1444]: vxlan.calico: Gained IPv6LL May 13 12:53:45.487717 containerd[1551]: time="2025-05-13T12:53:45.487620673Z" level=info msg="connecting to shim 9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e" address="unix:///run/containerd/s/d16220b20bcba12e0221833faf7caf3a552a749bf164b50578aceb5526281723" namespace=k8s.io protocol=ttrpc version=3 May 13 12:53:45.552737 systemd[1]: Started 
cri-containerd-2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e.scope - libcontainer container 2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e. May 13 12:53:45.562617 systemd[1]: Started cri-containerd-9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e.scope - libcontainer container 9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e. May 13 12:53:45.690216 containerd[1551]: time="2025-05-13T12:53:45.690180703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gsp6z,Uid:49521272-f372-4acc-b36d-7e519fb5603a,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e\"" May 13 12:53:45.696234 containerd[1551]: time="2025-05-13T12:53:45.696169725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 12:53:45.713236 containerd[1551]: time="2025-05-13T12:53:45.713202947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8glxb,Uid:1d6d4ece-ad85-44ab-b7d8-e8707b058cb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e\"" May 13 12:53:45.722036 containerd[1551]: time="2025-05-13T12:53:45.722003336Z" level=info msg="CreateContainer within sandbox \"2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:53:45.746283 systemd-networkd[1444]: cali89ad5b92290: Link UP May 13 12:53:45.748054 systemd-networkd[1444]: cali89ad5b92290: Gained carrier May 13 12:53:45.760010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1595721890.mount: Deactivated successfully. 
May 13 12:53:45.761954 containerd[1551]: time="2025-05-13T12:53:45.761926076Z" level=info msg="Container 9f214cb021f5d83cf7fb6b12c9bb578704190f9eed7f48cc461b42180d0c7fca: CDI devices from CRI Config.CDIDevices: []" May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.573 [INFO][4085] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-eth0 calico-apiserver-749c7f87b9- calico-apiserver f7acec06-24e7-499d-af08-e8d75e498103 716 0 2025-05-13 12:53:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:749c7f87b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-9999-9-100-4cb33ef211.novalocal calico-apiserver-749c7f87b9-59gx2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89ad5b92290 [] []}} ContainerID="ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-59gx2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-" May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.573 [INFO][4085] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-59gx2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-eth0" May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.651 [INFO][4165] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" 
HandleID="k8s-pod-network.ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-eth0" May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.666 [INFO][4165] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" HandleID="k8s-pod-network.ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-9999-9-100-4cb33ef211.novalocal", "pod":"calico-apiserver-749c7f87b9-59gx2", "timestamp":"2025-05-13 12:53:45.651190242 +0000 UTC"}, Hostname:"ci-9999-9-100-4cb33ef211.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.666 [INFO][4165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.666 [INFO][4165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.666 [INFO][4165] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-4cb33ef211.novalocal' May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.669 [INFO][4165] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.685 [INFO][4165] ipam/ipam.go 372: Looking up existing affinities for host host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.698 [INFO][4165] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.702 [INFO][4165] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.712 [INFO][4165] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.712 [INFO][4165] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.716 [INFO][4165] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7 May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.727 [INFO][4165] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.780321 
containerd[1551]: 2025-05-13 12:53:45.737 [INFO][4165] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.195/26] block=192.168.65.192/26 handle="k8s-pod-network.ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.737 [INFO][4165] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.195/26] handle="k8s-pod-network.ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.737 [INFO][4165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:53:45.780321 containerd[1551]: 2025-05-13 12:53:45.737 [INFO][4165] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.195/26] IPv6=[] ContainerID="ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" HandleID="k8s-pod-network.ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-eth0" May 13 12:53:45.781326 containerd[1551]: 2025-05-13 12:53:45.740 [INFO][4085] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-59gx2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-eth0", GenerateName:"calico-apiserver-749c7f87b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"f7acec06-24e7-499d-af08-e8d75e498103", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 53, 15, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"749c7f87b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-4cb33ef211.novalocal", ContainerID:"", Pod:"calico-apiserver-749c7f87b9-59gx2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89ad5b92290", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:53:45.781326 containerd[1551]: 2025-05-13 12:53:45.740 [INFO][4085] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.195/32] ContainerID="ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-59gx2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-eth0" May 13 12:53:45.781326 containerd[1551]: 2025-05-13 12:53:45.741 [INFO][4085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89ad5b92290 ContainerID="ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-59gx2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-eth0" May 13 12:53:45.781326 containerd[1551]: 2025-05-13 12:53:45.749 [INFO][4085] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-59gx2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-eth0" May 13 12:53:45.781326 containerd[1551]: 2025-05-13 12:53:45.759 [INFO][4085] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-59gx2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-eth0", GenerateName:"calico-apiserver-749c7f87b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"f7acec06-24e7-499d-af08-e8d75e498103", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 53, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"749c7f87b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-4cb33ef211.novalocal", ContainerID:"ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7", Pod:"calico-apiserver-749c7f87b9-59gx2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.195/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89ad5b92290", MAC:"1e:a3:a4:fd:61:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:53:45.781326 containerd[1551]: 2025-05-13 12:53:45.776 [INFO][4085] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-59gx2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--59gx2-eth0" May 13 12:53:45.784694 containerd[1551]: time="2025-05-13T12:53:45.784610420Z" level=info msg="CreateContainer within sandbox \"2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f214cb021f5d83cf7fb6b12c9bb578704190f9eed7f48cc461b42180d0c7fca\"" May 13 12:53:45.786895 containerd[1551]: time="2025-05-13T12:53:45.786536663Z" level=info msg="StartContainer for \"9f214cb021f5d83cf7fb6b12c9bb578704190f9eed7f48cc461b42180d0c7fca\"" May 13 12:53:45.788500 containerd[1551]: time="2025-05-13T12:53:45.788368926Z" level=info msg="connecting to shim 9f214cb021f5d83cf7fb6b12c9bb578704190f9eed7f48cc461b42180d0c7fca" address="unix:///run/containerd/s/6fe8dc33379c2f146103db8778a24de56ac510ab3740236c9ffed48d25fc8431" protocol=ttrpc version=3 May 13 12:53:45.825378 systemd[1]: Started cri-containerd-9f214cb021f5d83cf7fb6b12c9bb578704190f9eed7f48cc461b42180d0c7fca.scope - libcontainer container 9f214cb021f5d83cf7fb6b12c9bb578704190f9eed7f48cc461b42180d0c7fca. 
May 13 12:53:45.842634 containerd[1551]: time="2025-05-13T12:53:45.842548864Z" level=info msg="connecting to shim ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7" address="unix:///run/containerd/s/b06d7fa543fc1419fbb7baaec17ccde02e9e3a51dd2c7e224ecc2797e61234e3" namespace=k8s.io protocol=ttrpc version=3 May 13 12:53:45.877228 systemd-networkd[1444]: cali043d47b75b9: Link UP May 13 12:53:45.878312 systemd-networkd[1444]: cali043d47b75b9: Gained carrier May 13 12:53:45.879902 systemd[1]: Started cri-containerd-ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7.scope - libcontainer container ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7. May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.569 [INFO][4079] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-eth0 calico-apiserver-749c7f87b9- calico-apiserver ab33d1a6-7348-4ce5-8818-fbb6db783449 718 0 2025-05-13 12:53:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:749c7f87b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-9999-9-100-4cb33ef211.novalocal calico-apiserver-749c7f87b9-z9bcd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali043d47b75b9 [] []}} ContainerID="106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-z9bcd" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-" May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.570 [INFO][4079] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" 
Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-z9bcd" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-eth0" May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.647 [INFO][4155] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" HandleID="k8s-pod-network.106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-eth0" May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.692 [INFO][4155] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" HandleID="k8s-pod-network.106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-9999-9-100-4cb33ef211.novalocal", "pod":"calico-apiserver-749c7f87b9-z9bcd", "timestamp":"2025-05-13 12:53:45.647294429 +0000 UTC"}, Hostname:"ci-9999-9-100-4cb33ef211.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.692 [INFO][4155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.737 [INFO][4155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.737 [INFO][4155] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-4cb33ef211.novalocal' May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.773 [INFO][4155] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.785 [INFO][4155] ipam/ipam.go 372: Looking up existing affinities for host host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.798 [INFO][4155] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.803 [INFO][4155] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.807 [INFO][4155] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.807 [INFO][4155] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.815 [INFO][4155] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49 May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.823 [INFO][4155] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.912601 
containerd[1551]: 2025-05-13 12:53:45.849 [INFO][4155] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.196/26] block=192.168.65.192/26 handle="k8s-pod-network.106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.849 [INFO][4155] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.196/26] handle="k8s-pod-network.106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.850 [INFO][4155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:53:45.912601 containerd[1551]: 2025-05-13 12:53:45.850 [INFO][4155] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.196/26] IPv6=[] ContainerID="106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" HandleID="k8s-pod-network.106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-eth0" May 13 12:53:45.913800 containerd[1551]: 2025-05-13 12:53:45.857 [INFO][4079] cni-plugin/k8s.go 386: Populated endpoint ContainerID="106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-z9bcd" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-eth0", GenerateName:"calico-apiserver-749c7f87b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab33d1a6-7348-4ce5-8818-fbb6db783449", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 53, 15, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"749c7f87b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-4cb33ef211.novalocal", ContainerID:"", Pod:"calico-apiserver-749c7f87b9-z9bcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali043d47b75b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:53:45.913800 containerd[1551]: 2025-05-13 12:53:45.858 [INFO][4079] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.196/32] ContainerID="106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-z9bcd" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-eth0" May 13 12:53:45.913800 containerd[1551]: 2025-05-13 12:53:45.858 [INFO][4079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali043d47b75b9 ContainerID="106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-z9bcd" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-eth0" May 13 12:53:45.913800 containerd[1551]: 2025-05-13 12:53:45.877 [INFO][4079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-z9bcd" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-eth0" May 13 12:53:45.913800 containerd[1551]: 2025-05-13 12:53:45.878 [INFO][4079] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-z9bcd" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-eth0", GenerateName:"calico-apiserver-749c7f87b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab33d1a6-7348-4ce5-8818-fbb6db783449", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 53, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"749c7f87b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-4cb33ef211.novalocal", ContainerID:"106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49", Pod:"calico-apiserver-749c7f87b9-z9bcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.196/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali043d47b75b9", MAC:"4e:b9:52:4a:46:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:53:45.913800 containerd[1551]: 2025-05-13 12:53:45.902 [INFO][4079] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" Namespace="calico-apiserver" Pod="calico-apiserver-749c7f87b9-z9bcd" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--apiserver--749c7f87b9--z9bcd-eth0" May 13 12:53:45.916317 containerd[1551]: time="2025-05-13T12:53:45.916230961Z" level=info msg="StartContainer for \"9f214cb021f5d83cf7fb6b12c9bb578704190f9eed7f48cc461b42180d0c7fca\" returns successfully" May 13 12:53:45.969760 containerd[1551]: time="2025-05-13T12:53:45.969667731Z" level=info msg="connecting to shim 106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49" address="unix:///run/containerd/s/7b71892d888d4ddf25c996125d7ee1cb8f42abdd8311ee2f992835d85ee450cb" namespace=k8s.io protocol=ttrpc version=3 May 13 12:53:46.009716 systemd[1]: Started cri-containerd-106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49.scope - libcontainer container 106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49. 
May 13 12:53:46.079797 containerd[1551]: time="2025-05-13T12:53:46.078836355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749c7f87b9-z9bcd,Uid:ab33d1a6-7348-4ce5-8818-fbb6db783449,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49\"" May 13 12:53:46.097779 containerd[1551]: time="2025-05-13T12:53:46.097742942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-749c7f87b9-59gx2,Uid:f7acec06-24e7-499d-af08-e8d75e498103,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7\"" May 13 12:53:46.450415 containerd[1551]: time="2025-05-13T12:53:46.449853617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68947865bd-bbqw2,Uid:6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59,Namespace:calico-system,Attempt:0,}" May 13 12:53:46.622183 systemd-networkd[1444]: calie01fe781722: Link UP May 13 12:53:46.623968 systemd-networkd[1444]: calie01fe781722: Gained carrier May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.536 [INFO][4341] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-eth0 calico-kube-controllers-68947865bd- calico-system 6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59 717 0 2025-05-13 12:53:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68947865bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-9999-9-100-4cb33ef211.novalocal calico-kube-controllers-68947865bd-bbqw2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie01fe781722 [] []}} 
ContainerID="1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" Namespace="calico-system" Pod="calico-kube-controllers-68947865bd-bbqw2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-" May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.536 [INFO][4341] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" Namespace="calico-system" Pod="calico-kube-controllers-68947865bd-bbqw2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-eth0" May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.568 [INFO][4352] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" HandleID="k8s-pod-network.1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-eth0" May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.580 [INFO][4352] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" HandleID="k8s-pod-network.1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b690), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999-9-100-4cb33ef211.novalocal", "pod":"calico-kube-controllers-68947865bd-bbqw2", "timestamp":"2025-05-13 12:53:46.568604127 +0000 UTC"}, Hostname:"ci-9999-9-100-4cb33ef211.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.580 [INFO][4352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.581 [INFO][4352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.581 [INFO][4352] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-4cb33ef211.novalocal' May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.584 [INFO][4352] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.588 [INFO][4352] ipam/ipam.go 372: Looking up existing affinities for host host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.593 [INFO][4352] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.595 [INFO][4352] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.597 [INFO][4352] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.597 [INFO][4352] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.600 [INFO][4352] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0 May 13 
12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.606 [INFO][4352] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.614 [INFO][4352] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.197/26] block=192.168.65.192/26 handle="k8s-pod-network.1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.615 [INFO][4352] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.197/26] handle="k8s-pod-network.1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.615 [INFO][4352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 12:53:46.645350 containerd[1551]: 2025-05-13 12:53:46.615 [INFO][4352] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.197/26] IPv6=[] ContainerID="1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" HandleID="k8s-pod-network.1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-eth0" May 13 12:53:46.646249 containerd[1551]: 2025-05-13 12:53:46.617 [INFO][4341] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" Namespace="calico-system" Pod="calico-kube-controllers-68947865bd-bbqw2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-eth0", GenerateName:"calico-kube-controllers-68947865bd-", Namespace:"calico-system", SelfLink:"", UID:"6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 53, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68947865bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-4cb33ef211.novalocal", ContainerID:"", Pod:"calico-kube-controllers-68947865bd-bbqw2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie01fe781722", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:53:46.646249 containerd[1551]: 2025-05-13 12:53:46.617 [INFO][4341] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.197/32] ContainerID="1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" Namespace="calico-system" Pod="calico-kube-controllers-68947865bd-bbqw2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-eth0" May 13 12:53:46.646249 containerd[1551]: 2025-05-13 12:53:46.617 [INFO][4341] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie01fe781722 
ContainerID="1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" Namespace="calico-system" Pod="calico-kube-controllers-68947865bd-bbqw2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-eth0" May 13 12:53:46.646249 containerd[1551]: 2025-05-13 12:53:46.622 [INFO][4341] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" Namespace="calico-system" Pod="calico-kube-controllers-68947865bd-bbqw2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-eth0" May 13 12:53:46.646249 containerd[1551]: 2025-05-13 12:53:46.624 [INFO][4341] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" Namespace="calico-system" Pod="calico-kube-controllers-68947865bd-bbqw2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-eth0", GenerateName:"calico-kube-controllers-68947865bd-", Namespace:"calico-system", SelfLink:"", UID:"6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 53, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68947865bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-4cb33ef211.novalocal", ContainerID:"1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0", Pod:"calico-kube-controllers-68947865bd-bbqw2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie01fe781722", MAC:"52:11:32:0b:e7:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:53:46.646249 containerd[1551]: 2025-05-13 12:53:46.643 [INFO][4341] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" Namespace="calico-system" Pod="calico-kube-controllers-68947865bd-bbqw2" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-calico--kube--controllers--68947865bd--bbqw2-eth0" May 13 12:53:46.690671 containerd[1551]: time="2025-05-13T12:53:46.690609232Z" level=info msg="connecting to shim 1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0" address="unix:///run/containerd/s/c28786b6d49f2e600411872f07fc35433297bb4e720f5872a16fbe27cae162db" namespace=k8s.io protocol=ttrpc version=3 May 13 12:53:46.723185 systemd[1]: Started cri-containerd-1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0.scope - libcontainer container 1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0. 
May 13 12:53:46.773441 kubelet[2758]: I0513 12:53:46.773024 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8glxb" podStartSLOduration=37.773005848 podStartE2EDuration="37.773005848s" podCreationTimestamp="2025-05-13 12:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:53:46.753241082 +0000 UTC m=+42.576389292" watchObservedRunningTime="2025-05-13 12:53:46.773005848 +0000 UTC m=+42.596154058" May 13 12:53:46.870552 containerd[1551]: time="2025-05-13T12:53:46.870201865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68947865bd-bbqw2,Uid:6e68020b-8e9f-4fd5-b1d5-7adff9fb7f59,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0\"" May 13 12:53:47.004649 systemd-networkd[1444]: cali043d47b75b9: Gained IPv6LL May 13 12:53:47.005319 systemd-networkd[1444]: calibadc90a73d4: Gained IPv6LL May 13 12:53:47.388851 systemd-networkd[1444]: calidfd6a6471f0: Gained IPv6LL May 13 12:53:47.449077 containerd[1551]: time="2025-05-13T12:53:47.448862794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tvf49,Uid:51adfee0-0257-46dd-818d-dc755eed294c,Namespace:kube-system,Attempt:0,}" May 13 12:53:47.582736 systemd-networkd[1444]: cali89ad5b92290: Gained IPv6LL May 13 12:53:47.660992 systemd-networkd[1444]: cali29e991ba754: Link UP May 13 12:53:47.662394 systemd-networkd[1444]: cali29e991ba754: Gained carrier May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.552 [INFO][4425] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-eth0 coredns-668d6bf9bc- kube-system 51adfee0-0257-46dd-818d-dc755eed294c 712 0 2025-05-13 12:53:09 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-9999-9-100-4cb33ef211.novalocal coredns-668d6bf9bc-tvf49 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali29e991ba754 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" Namespace="kube-system" Pod="coredns-668d6bf9bc-tvf49" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-" May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.552 [INFO][4425] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" Namespace="kube-system" Pod="coredns-668d6bf9bc-tvf49" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-eth0" May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.607 [INFO][4437] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" HandleID="k8s-pod-network.374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-eth0" May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.618 [INFO][4437] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" HandleID="k8s-pod-network.374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fc0a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-9999-9-100-4cb33ef211.novalocal", "pod":"coredns-668d6bf9bc-tvf49", "timestamp":"2025-05-13 12:53:47.607497082 +0000 
UTC"}, Hostname:"ci-9999-9-100-4cb33ef211.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.618 [INFO][4437] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.618 [INFO][4437] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.618 [INFO][4437] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-100-4cb33ef211.novalocal' May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.621 [INFO][4437] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.625 [INFO][4437] ipam/ipam.go 372: Looking up existing affinities for host host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.630 [INFO][4437] ipam/ipam.go 489: Trying affinity for 192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.632 [INFO][4437] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.635 [INFO][4437] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.192/26 host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.635 [INFO][4437] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.192/26 handle="k8s-pod-network.374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" 
host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.637 [INFO][4437] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211 May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.648 [INFO][4437] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.192/26 handle="k8s-pod-network.374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.656 [INFO][4437] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.198/26] block=192.168.65.192/26 handle="k8s-pod-network.374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.656 [INFO][4437] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.198/26] handle="k8s-pod-network.374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" host="ci-9999-9-100-4cb33ef211.novalocal" May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.656 [INFO][4437] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 12:53:47.678745 containerd[1551]: 2025-05-13 12:53:47.656 [INFO][4437] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.198/26] IPv6=[] ContainerID="374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" HandleID="k8s-pod-network.374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" Workload="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-eth0" May 13 12:53:47.686992 containerd[1551]: 2025-05-13 12:53:47.658 [INFO][4425] cni-plugin/k8s.go 386: Populated endpoint ContainerID="374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" Namespace="kube-system" Pod="coredns-668d6bf9bc-tvf49" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"51adfee0-0257-46dd-818d-dc755eed294c", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 53, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-4cb33ef211.novalocal", ContainerID:"", Pod:"coredns-668d6bf9bc-tvf49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali29e991ba754", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:53:47.686992 containerd[1551]: 2025-05-13 12:53:47.658 [INFO][4425] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.198/32] ContainerID="374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" Namespace="kube-system" Pod="coredns-668d6bf9bc-tvf49" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-eth0" May 13 12:53:47.686992 containerd[1551]: 2025-05-13 12:53:47.658 [INFO][4425] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29e991ba754 ContainerID="374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" Namespace="kube-system" Pod="coredns-668d6bf9bc-tvf49" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-eth0" May 13 12:53:47.686992 containerd[1551]: 2025-05-13 12:53:47.662 [INFO][4425] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" Namespace="kube-system" Pod="coredns-668d6bf9bc-tvf49" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-eth0" May 13 12:53:47.686992 containerd[1551]: 2025-05-13 12:53:47.663 [INFO][4425] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" Namespace="kube-system" Pod="coredns-668d6bf9bc-tvf49" 
WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"51adfee0-0257-46dd-818d-dc755eed294c", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 12, 53, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-100-4cb33ef211.novalocal", ContainerID:"374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211", Pod:"coredns-668d6bf9bc-tvf49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29e991ba754", MAC:"3a:01:a1:4b:22:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 12:53:47.686992 containerd[1551]: 
2025-05-13 12:53:47.675 [INFO][4425] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" Namespace="kube-system" Pod="coredns-668d6bf9bc-tvf49" WorkloadEndpoint="ci--9999--9--100--4cb33ef211.novalocal-k8s-coredns--668d6bf9bc--tvf49-eth0" May 13 12:53:47.745348 containerd[1551]: time="2025-05-13T12:53:47.745191044Z" level=info msg="connecting to shim 374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211" address="unix:///run/containerd/s/ff676e9a7354122f648ee3d82e9f5cdfc78457038a154b070b10995880760f92" namespace=k8s.io protocol=ttrpc version=3 May 13 12:53:47.793667 systemd[1]: Started cri-containerd-374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211.scope - libcontainer container 374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211. May 13 12:53:47.910684 containerd[1551]: time="2025-05-13T12:53:47.910634390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tvf49,Uid:51adfee0-0257-46dd-818d-dc755eed294c,Namespace:kube-system,Attempt:0,} returns sandbox id \"374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211\"" May 13 12:53:47.915239 containerd[1551]: time="2025-05-13T12:53:47.915129783Z" level=info msg="CreateContainer within sandbox \"374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:53:47.938883 containerd[1551]: time="2025-05-13T12:53:47.938839811Z" level=info msg="Container dc45baf005f4d821de97ed723314e7d99c8a147ba653ab16f83a9de06645769c: CDI devices from CRI Config.CDIDevices: []" May 13 12:53:47.980676 containerd[1551]: time="2025-05-13T12:53:47.980601832Z" level=info msg="CreateContainer within sandbox \"374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dc45baf005f4d821de97ed723314e7d99c8a147ba653ab16f83a9de06645769c\"" May 
13 12:53:47.981622 containerd[1551]: time="2025-05-13T12:53:47.981375268Z" level=info msg="StartContainer for \"dc45baf005f4d821de97ed723314e7d99c8a147ba653ab16f83a9de06645769c\"" May 13 12:53:47.983195 containerd[1551]: time="2025-05-13T12:53:47.983164615Z" level=info msg="connecting to shim dc45baf005f4d821de97ed723314e7d99c8a147ba653ab16f83a9de06645769c" address="unix:///run/containerd/s/ff676e9a7354122f648ee3d82e9f5cdfc78457038a154b070b10995880760f92" protocol=ttrpc version=3 May 13 12:53:48.008864 systemd[1]: Started cri-containerd-dc45baf005f4d821de97ed723314e7d99c8a147ba653ab16f83a9de06645769c.scope - libcontainer container dc45baf005f4d821de97ed723314e7d99c8a147ba653ab16f83a9de06645769c. May 13 12:53:48.061826 containerd[1551]: time="2025-05-13T12:53:48.061794552Z" level=info msg="StartContainer for \"dc45baf005f4d821de97ed723314e7d99c8a147ba653ab16f83a9de06645769c\" returns successfully" May 13 12:53:48.211075 containerd[1551]: time="2025-05-13T12:53:48.210987938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:48.212455 containerd[1551]: time="2025-05-13T12:53:48.212429688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 13 12:53:48.213926 containerd[1551]: time="2025-05-13T12:53:48.213880795Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:48.216345 containerd[1551]: time="2025-05-13T12:53:48.216301468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:48.217123 containerd[1551]: time="2025-05-13T12:53:48.216988479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" 
with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.520591219s" May 13 12:53:48.217123 containerd[1551]: time="2025-05-13T12:53:48.217028320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 13 12:53:48.219496 containerd[1551]: time="2025-05-13T12:53:48.219048237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 12:53:48.220833 containerd[1551]: time="2025-05-13T12:53:48.220624223Z" level=info msg="CreateContainer within sandbox \"9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 12:53:48.236438 containerd[1551]: time="2025-05-13T12:53:48.236397950Z" level=info msg="Container fc07ad63684b96333ae5f047a79864751140bd2bad66838cc9ae9de1356b93e5: CDI devices from CRI Config.CDIDevices: []" May 13 12:53:48.258242 containerd[1551]: time="2025-05-13T12:53:48.258206007Z" level=info msg="CreateContainer within sandbox \"9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fc07ad63684b96333ae5f047a79864751140bd2bad66838cc9ae9de1356b93e5\"" May 13 12:53:48.258710 containerd[1551]: time="2025-05-13T12:53:48.258687235Z" level=info msg="StartContainer for \"fc07ad63684b96333ae5f047a79864751140bd2bad66838cc9ae9de1356b93e5\"" May 13 12:53:48.260611 containerd[1551]: time="2025-05-13T12:53:48.260581751Z" level=info msg="connecting to shim fc07ad63684b96333ae5f047a79864751140bd2bad66838cc9ae9de1356b93e5" address="unix:///run/containerd/s/d16220b20bcba12e0221833faf7caf3a552a749bf164b50578aceb5526281723" protocol=ttrpc version=3 May 13 
12:53:48.280636 systemd[1]: Started cri-containerd-fc07ad63684b96333ae5f047a79864751140bd2bad66838cc9ae9de1356b93e5.scope - libcontainer container fc07ad63684b96333ae5f047a79864751140bd2bad66838cc9ae9de1356b93e5. May 13 12:53:48.328660 containerd[1551]: time="2025-05-13T12:53:48.328618614Z" level=info msg="StartContainer for \"fc07ad63684b96333ae5f047a79864751140bd2bad66838cc9ae9de1356b93e5\" returns successfully" May 13 12:53:48.541734 systemd-networkd[1444]: calie01fe781722: Gained IPv6LL May 13 12:53:48.799376 kubelet[2758]: I0513 12:53:48.799118 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tvf49" podStartSLOduration=39.799086529 podStartE2EDuration="39.799086529s" podCreationTimestamp="2025-05-13 12:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:53:48.794389705 +0000 UTC m=+44.617537985" watchObservedRunningTime="2025-05-13 12:53:48.799086529 +0000 UTC m=+44.622234809" May 13 12:53:49.181093 systemd-networkd[1444]: cali29e991ba754: Gained IPv6LL May 13 12:53:53.295142 containerd[1551]: time="2025-05-13T12:53:53.294507367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:53.297094 containerd[1551]: time="2025-05-13T12:53:53.297074328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 13 12:53:53.298782 containerd[1551]: time="2025-05-13T12:53:53.298761662Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:53.301241 containerd[1551]: time="2025-05-13T12:53:53.301219327Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:53.301931 containerd[1551]: time="2025-05-13T12:53:53.301888386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 5.082791503s" May 13 12:53:53.301981 containerd[1551]: time="2025-05-13T12:53:53.301929540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 12:53:53.304151 containerd[1551]: time="2025-05-13T12:53:53.303835966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 12:53:53.305248 containerd[1551]: time="2025-05-13T12:53:53.305202836Z" level=info msg="CreateContainer within sandbox \"106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 12:53:53.318498 containerd[1551]: time="2025-05-13T12:53:53.318146255Z" level=info msg="Container 6db20dadfa3f75974145a7432eb3dca184c357f2f992e7f2a974a8dc4dfc4fac: CDI devices from CRI Config.CDIDevices: []" May 13 12:53:53.328017 containerd[1551]: time="2025-05-13T12:53:53.327990558Z" level=info msg="CreateContainer within sandbox \"106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6db20dadfa3f75974145a7432eb3dca184c357f2f992e7f2a974a8dc4dfc4fac\"" May 13 12:53:53.328725 containerd[1551]: time="2025-05-13T12:53:53.328664596Z" level=info msg="StartContainer for 
\"6db20dadfa3f75974145a7432eb3dca184c357f2f992e7f2a974a8dc4dfc4fac\"" May 13 12:53:53.329900 containerd[1551]: time="2025-05-13T12:53:53.329861040Z" level=info msg="connecting to shim 6db20dadfa3f75974145a7432eb3dca184c357f2f992e7f2a974a8dc4dfc4fac" address="unix:///run/containerd/s/7b71892d888d4ddf25c996125d7ee1cb8f42abdd8311ee2f992835d85ee450cb" protocol=ttrpc version=3 May 13 12:53:53.358627 systemd[1]: Started cri-containerd-6db20dadfa3f75974145a7432eb3dca184c357f2f992e7f2a974a8dc4dfc4fac.scope - libcontainer container 6db20dadfa3f75974145a7432eb3dca184c357f2f992e7f2a974a8dc4dfc4fac. May 13 12:53:53.419416 containerd[1551]: time="2025-05-13T12:53:53.419319333Z" level=info msg="StartContainer for \"6db20dadfa3f75974145a7432eb3dca184c357f2f992e7f2a974a8dc4dfc4fac\" returns successfully" May 13 12:53:53.823180 kubelet[2758]: I0513 12:53:53.822619 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-749c7f87b9-z9bcd" podStartSLOduration=31.601423087 podStartE2EDuration="38.822585543s" podCreationTimestamp="2025-05-13 12:53:15 +0000 UTC" firstStartedPulling="2025-05-13 12:53:46.081935458 +0000 UTC m=+41.905083658" lastFinishedPulling="2025-05-13 12:53:53.303097914 +0000 UTC m=+49.126246114" observedRunningTime="2025-05-13 12:53:53.818007108 +0000 UTC m=+49.641155358" watchObservedRunningTime="2025-05-13 12:53:53.822585543 +0000 UTC m=+49.645733864" May 13 12:53:53.846233 containerd[1551]: time="2025-05-13T12:53:53.845449411Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:53.848424 containerd[1551]: time="2025-05-13T12:53:53.848348127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 13 12:53:53.855945 containerd[1551]: time="2025-05-13T12:53:53.855881338Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with 
image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 551.913916ms" May 13 12:53:53.856198 containerd[1551]: time="2025-05-13T12:53:53.856154328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 13 12:53:53.858177 containerd[1551]: time="2025-05-13T12:53:53.858008631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 13 12:53:53.862141 containerd[1551]: time="2025-05-13T12:53:53.862061294Z" level=info msg="CreateContainer within sandbox \"ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 12:53:53.883463 containerd[1551]: time="2025-05-13T12:53:53.883393257Z" level=info msg="Container ed606ff06215d70015d4c07852602badb8f11a6ec5be76ec817f4807a8c1eb99: CDI devices from CRI Config.CDIDevices: []" May 13 12:53:53.898347 containerd[1551]: time="2025-05-13T12:53:53.898294174Z" level=info msg="CreateContainer within sandbox \"ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ed606ff06215d70015d4c07852602badb8f11a6ec5be76ec817f4807a8c1eb99\"" May 13 12:53:53.899182 containerd[1551]: time="2025-05-13T12:53:53.899149446Z" level=info msg="StartContainer for \"ed606ff06215d70015d4c07852602badb8f11a6ec5be76ec817f4807a8c1eb99\"" May 13 12:53:53.900346 containerd[1551]: time="2025-05-13T12:53:53.900295410Z" level=info msg="connecting to shim ed606ff06215d70015d4c07852602badb8f11a6ec5be76ec817f4807a8c1eb99" address="unix:///run/containerd/s/b06d7fa543fc1419fbb7baaec17ccde02e9e3a51dd2c7e224ecc2797e61234e3" 
protocol=ttrpc version=3 May 13 12:53:53.925757 systemd[1]: Started cri-containerd-ed606ff06215d70015d4c07852602badb8f11a6ec5be76ec817f4807a8c1eb99.scope - libcontainer container ed606ff06215d70015d4c07852602badb8f11a6ec5be76ec817f4807a8c1eb99. May 13 12:53:53.987985 containerd[1551]: time="2025-05-13T12:53:53.987953186Z" level=info msg="StartContainer for \"ed606ff06215d70015d4c07852602badb8f11a6ec5be76ec817f4807a8c1eb99\" returns successfully" May 13 12:53:54.812268 kubelet[2758]: I0513 12:53:54.812105 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-749c7f87b9-59gx2" podStartSLOduration=32.053204266 podStartE2EDuration="39.812086886s" podCreationTimestamp="2025-05-13 12:53:15 +0000 UTC" firstStartedPulling="2025-05-13 12:53:46.099005405 +0000 UTC m=+41.922153605" lastFinishedPulling="2025-05-13 12:53:53.857887974 +0000 UTC m=+49.681036225" observedRunningTime="2025-05-13 12:53:54.811709187 +0000 UTC m=+50.634857387" watchObservedRunningTime="2025-05-13 12:53:54.812086886 +0000 UTC m=+50.635235096" May 13 12:53:55.800747 kubelet[2758]: I0513 12:53:55.800692 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 12:53:58.465416 containerd[1551]: time="2025-05-13T12:53:58.465310017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:58.466524 containerd[1551]: time="2025-05-13T12:53:58.466290710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 13 12:53:58.468234 containerd[1551]: time="2025-05-13T12:53:58.468186027Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:58.470742 containerd[1551]: time="2025-05-13T12:53:58.470699357Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:53:58.471406 containerd[1551]: time="2025-05-13T12:53:58.471277158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 4.613239826s" May 13 12:53:58.471406 containerd[1551]: time="2025-05-13T12:53:58.471324503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 13 12:53:58.472452 containerd[1551]: time="2025-05-13T12:53:58.472418744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 12:53:58.485788 containerd[1551]: time="2025-05-13T12:53:58.485707729Z" level=info msg="CreateContainer within sandbox \"1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 13 12:53:58.502720 containerd[1551]: time="2025-05-13T12:53:58.502684320Z" level=info msg="Container 1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d: CDI devices from CRI Config.CDIDevices: []" May 13 12:53:58.507865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2362607840.mount: Deactivated successfully. 
May 13 12:53:58.516866 containerd[1551]: time="2025-05-13T12:53:58.516836144Z" level=info msg="CreateContainer within sandbox \"1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\"" May 13 12:53:58.518588 containerd[1551]: time="2025-05-13T12:53:58.518558876Z" level=info msg="StartContainer for \"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\"" May 13 12:53:58.520035 containerd[1551]: time="2025-05-13T12:53:58.520013993Z" level=info msg="connecting to shim 1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d" address="unix:///run/containerd/s/c28786b6d49f2e600411872f07fc35433297bb4e720f5872a16fbe27cae162db" protocol=ttrpc version=3 May 13 12:53:58.545656 systemd[1]: Started cri-containerd-1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d.scope - libcontainer container 1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d. 
May 13 12:53:58.608691 containerd[1551]: time="2025-05-13T12:53:58.608642840Z" level=info msg="StartContainer for \"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" returns successfully" May 13 12:53:58.864799 kubelet[2758]: I0513 12:53:58.864284 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68947865bd-bbqw2" podStartSLOduration=31.265774583 podStartE2EDuration="42.864247564s" podCreationTimestamp="2025-05-13 12:53:16 +0000 UTC" firstStartedPulling="2025-05-13 12:53:46.873709222 +0000 UTC m=+42.696857422" lastFinishedPulling="2025-05-13 12:53:58.472182203 +0000 UTC m=+54.295330403" observedRunningTime="2025-05-13 12:53:58.857049084 +0000 UTC m=+54.680197334" watchObservedRunningTime="2025-05-13 12:53:58.864247564 +0000 UTC m=+54.687395854" May 13 12:53:58.926966 containerd[1551]: time="2025-05-13T12:53:58.926919399Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"ffcee327b871709b31229ee805554dfd5cd3b0beebd1c857fe2c275e575667a8\" pid:4719 exited_at:{seconds:1747140838 nanos:926310502}" May 13 12:54:00.915692 containerd[1551]: time="2025-05-13T12:54:00.915637452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:00.917149 containerd[1551]: time="2025-05-13T12:54:00.916931327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 13 12:54:00.918458 containerd[1551]: time="2025-05-13T12:54:00.918429495Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:00.921166 containerd[1551]: time="2025-05-13T12:54:00.921141933Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:54:00.921991 containerd[1551]: time="2025-05-13T12:54:00.921957334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.44950526s" May 13 12:54:00.922040 containerd[1551]: time="2025-05-13T12:54:00.921989853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 13 12:54:00.924808 containerd[1551]: time="2025-05-13T12:54:00.924777188Z" level=info msg="CreateContainer within sandbox \"9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 12:54:01.130070 containerd[1551]: time="2025-05-13T12:54:01.129905557Z" level=info msg="Container b308af4d0d8e867e692928a696b01b1859a14586203c1b10ee3b7b00955c989a: CDI devices from CRI Config.CDIDevices: []" May 13 12:54:01.156845 containerd[1551]: time="2025-05-13T12:54:01.156730938Z" level=info msg="CreateContainer within sandbox \"9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b308af4d0d8e867e692928a696b01b1859a14586203c1b10ee3b7b00955c989a\"" May 13 12:54:01.158168 containerd[1551]: time="2025-05-13T12:54:01.158100387Z" level=info msg="StartContainer for \"b308af4d0d8e867e692928a696b01b1859a14586203c1b10ee3b7b00955c989a\"" May 13 12:54:01.162217 
containerd[1551]: time="2025-05-13T12:54:01.162091941Z" level=info msg="connecting to shim b308af4d0d8e867e692928a696b01b1859a14586203c1b10ee3b7b00955c989a" address="unix:///run/containerd/s/d16220b20bcba12e0221833faf7caf3a552a749bf164b50578aceb5526281723" protocol=ttrpc version=3 May 13 12:54:01.219825 systemd[1]: Started cri-containerd-b308af4d0d8e867e692928a696b01b1859a14586203c1b10ee3b7b00955c989a.scope - libcontainer container b308af4d0d8e867e692928a696b01b1859a14586203c1b10ee3b7b00955c989a. May 13 12:54:01.276873 containerd[1551]: time="2025-05-13T12:54:01.276803083Z" level=info msg="StartContainer for \"b308af4d0d8e867e692928a696b01b1859a14586203c1b10ee3b7b00955c989a\" returns successfully" May 13 12:54:01.594157 kubelet[2758]: I0513 12:54:01.594053 2758 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 12:54:01.594157 kubelet[2758]: I0513 12:54:01.594117 2758 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 12:54:01.886550 kubelet[2758]: I0513 12:54:01.885871 2758 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-gsp6z" podStartSLOduration=30.658334275 podStartE2EDuration="45.885839654s" podCreationTimestamp="2025-05-13 12:53:16 +0000 UTC" firstStartedPulling="2025-05-13 12:53:45.695529664 +0000 UTC m=+41.518677865" lastFinishedPulling="2025-05-13 12:54:00.923035044 +0000 UTC m=+56.746183244" observedRunningTime="2025-05-13 12:54:01.880884755 +0000 UTC m=+57.704033005" watchObservedRunningTime="2025-05-13 12:54:01.885839654 +0000 UTC m=+57.708987904" May 13 12:54:03.504019 kubelet[2758]: I0513 12:54:03.503564 2758 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 12:54:12.800790 containerd[1551]: time="2025-05-13T12:54:12.800728887Z" level=info 
msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"528d8cef70bb2d4f63c21246b6656def13cb2bfedc8147883d44ab376aa79a6a\" pid:4794 exited_at:{seconds:1747140852 nanos:800121028}" May 13 12:54:28.917470 containerd[1551]: time="2025-05-13T12:54:28.917414246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"3610230898869c5c1512b154373d6b06464a3520da74586706bae1c6c218f2a3\" pid:4828 exited_at:{seconds:1747140868 nanos:917187843}" May 13 12:54:31.657540 containerd[1551]: time="2025-05-13T12:54:31.657295139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"a72a62f16415656ebc62ae5a780325b890a2d88f6a07842377279c13d39b2f40\" pid:4850 exited_at:{seconds:1747140871 nanos:657059698}" May 13 12:54:42.901226 containerd[1551]: time="2025-05-13T12:54:42.901164604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"bf7ce364f44c236d110fcdabc7a2d44e2d3d4d6c4da677bc03d783b1c962c8fc\" pid:4874 exited_at:{seconds:1747140882 nanos:899897435}" May 13 12:54:50.303369 kernel: hrtimer: interrupt took 4350716 ns May 13 12:54:58.955611 containerd[1551]: time="2025-05-13T12:54:58.955299766Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"49915a25a6747f5b12b3e41ee5fba41f0f60b9127ce5599dbf3d75516a98c24a\" pid:4898 exited_at:{seconds:1747140898 nanos:954195183}" May 13 12:55:12.880565 containerd[1551]: time="2025-05-13T12:55:12.880411999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"4a17205ee61f2274d62c741279ed917b540046b74eefb15b4a201c343003a722\" pid:4932 
exited_at:{seconds:1747140912 nanos:879656291}" May 13 12:55:28.947902 containerd[1551]: time="2025-05-13T12:55:28.947851533Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"5fdea38a095d8ca9ea7666d14c66b1e1a6890d6de21b6f523453e135e772cd81\" pid:4973 exited_at:{seconds:1747140928 nanos:947510303}" May 13 12:55:31.673267 containerd[1551]: time="2025-05-13T12:55:31.673165588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"24b6cf991ed14b763d0479d14ce17c44f8e9e4aaae69c0b30f114688f4752b69\" pid:4995 exited_at:{seconds:1747140931 nanos:671453127}" May 13 12:55:42.842316 containerd[1551]: time="2025-05-13T12:55:42.842255068Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"8fe046bf4cd86510511116f12114084f93f8aea95063978fabf93f5065f22b7a\" pid:5020 exited_at:{seconds:1747140942 nanos:841888508}" May 13 12:55:58.968314 containerd[1551]: time="2025-05-13T12:55:58.968020627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"a1c7dfd6eaa3a641c254f2bbb4b672a17ab34c7031595063ae8d2dfef2317767\" pid:5047 exited_at:{seconds:1747140958 nanos:967437893}" May 13 12:56:12.835705 containerd[1551]: time="2025-05-13T12:56:12.835637287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"99d92d654cfb53c2d456db02a56425a0253c9cdaf54a0aa7c390d5f90ff8c521\" pid:5072 exited_at:{seconds:1747140972 nanos:833888611}" May 13 12:56:28.934777 containerd[1551]: time="2025-05-13T12:56:28.934459461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" 
id:\"9878cf13440906d9da9d3b4cdb5161d20ae968504d2900e51efeafdb69be4fb0\" pid:5105 exited_at:{seconds:1747140988 nanos:934207500}" May 13 12:56:31.653623 containerd[1551]: time="2025-05-13T12:56:31.653375412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"9243e0eb51fbeb27528eddbc7ac7d9dc35d16262e2616336b1141cd53f3473b0\" pid:5126 exited_at:{seconds:1747140991 nanos:652220725}" May 13 12:56:42.868671 containerd[1551]: time="2025-05-13T12:56:42.868382396Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"b6a9873ed1506c9858895b89d38c1ab570956b5632cc471fe73641ed69853f9a\" pid:5150 exited_at:{seconds:1747141002 nanos:866833579}" May 13 12:56:58.979446 containerd[1551]: time="2025-05-13T12:56:58.979195307Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"0a66bce13013d675d3c53c1d104a36a22f0cf1164ab2648deff72983ebe93f3b\" pid:5193 exited_at:{seconds:1747141018 nanos:978232395}" May 13 12:57:12.869329 containerd[1551]: time="2025-05-13T12:57:12.868950798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"eac4793ec1150be77b2405302accb4c27eac1d30c842221d4e5b31c9af3ce6e3\" pid:5221 exited_at:{seconds:1747141032 nanos:867799977}" May 13 12:57:29.005558 containerd[1551]: time="2025-05-13T12:57:29.005307898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"3c018c73e7ab0de7ededa16fbecd024791339eace1a2825c912327972bee4472\" pid:5244 exited_at:{seconds:1747141049 nanos:4905704}" May 13 12:57:31.664277 containerd[1551]: time="2025-05-13T12:57:31.664146900Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"a26cb0af621f30453c5f4afc5cefa93ff949a09bc556463cad29acad6b429181\" pid:5266 exited_at:{seconds:1747141051 nanos:663722192}" May 13 12:57:37.291551 systemd[1]: Started sshd@7-172.24.4.211:22-172.24.4.1:32810.service - OpenSSH per-connection server daemon (172.24.4.1:32810). May 13 12:57:38.508469 sshd[5283]: Accepted publickey for core from 172.24.4.1 port 32810 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY May 13 12:57:38.514467 sshd-session[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:38.536884 systemd-logind[1524]: New session 10 of user core. May 13 12:57:38.545736 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 12:57:39.323358 sshd[5285]: Connection closed by 172.24.4.1 port 32810 May 13 12:57:39.324110 sshd-session[5283]: pam_unix(sshd:session): session closed for user core May 13 12:57:39.335031 systemd[1]: sshd@7-172.24.4.211:22-172.24.4.1:32810.service: Deactivated successfully. May 13 12:57:39.344445 systemd[1]: session-10.scope: Deactivated successfully. May 13 12:57:39.346823 systemd-logind[1524]: Session 10 logged out. Waiting for processes to exit. May 13 12:57:39.352142 systemd-logind[1524]: Removed session 10. May 13 12:57:42.868302 containerd[1551]: time="2025-05-13T12:57:42.868085181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"1bdc73bb3b39417648f0aff775e77986cd188e79cb29cf39fef502d14b5a53e4\" pid:5315 exited_at:{seconds:1747141062 nanos:865794207}" May 13 12:57:44.357430 systemd[1]: Started sshd@8-172.24.4.211:22-172.24.4.1:49244.service - OpenSSH per-connection server daemon (172.24.4.1:49244). 
May 13 12:57:45.459590 sshd[5327]: Accepted publickey for core from 172.24.4.1 port 49244 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY May 13 12:57:45.463538 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:45.474173 systemd-logind[1524]: New session 11 of user core. May 13 12:57:45.484768 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 12:57:46.346534 sshd[5329]: Connection closed by 172.24.4.1 port 49244 May 13 12:57:46.347885 sshd-session[5327]: pam_unix(sshd:session): session closed for user core May 13 12:57:46.369636 systemd[1]: sshd@8-172.24.4.211:22-172.24.4.1:49244.service: Deactivated successfully. May 13 12:57:46.380701 systemd[1]: session-11.scope: Deactivated successfully. May 13 12:57:46.388785 systemd-logind[1524]: Session 11 logged out. Waiting for processes to exit. May 13 12:57:46.395506 systemd-logind[1524]: Removed session 11. May 13 12:57:51.373064 systemd[1]: Started sshd@9-172.24.4.211:22-172.24.4.1:49258.service - OpenSSH per-connection server daemon (172.24.4.1:49258). May 13 12:57:52.613968 sshd[5343]: Accepted publickey for core from 172.24.4.1 port 49258 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY May 13 12:57:52.619217 sshd-session[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:52.631649 systemd-logind[1524]: New session 12 of user core. May 13 12:57:52.637853 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 12:57:53.459097 sshd[5345]: Connection closed by 172.24.4.1 port 49258 May 13 12:57:53.462435 sshd-session[5343]: pam_unix(sshd:session): session closed for user core May 13 12:57:53.480440 systemd-logind[1524]: Session 12 logged out. Waiting for processes to exit. May 13 12:57:53.483797 systemd[1]: sshd@9-172.24.4.211:22-172.24.4.1:49258.service: Deactivated successfully. 
May 13 12:57:53.493767 systemd[1]: session-12.scope: Deactivated successfully. May 13 12:57:53.504746 systemd-logind[1524]: Removed session 12. May 13 12:57:57.907235 containerd[1551]: time="2025-05-13T12:57:57.906431458Z" level=warning msg="container event discarded" container=0f83b75909ecc3354953212c8cf075ec0186791158e31bcde62f65f39b29b691 type=CONTAINER_CREATED_EVENT May 13 12:57:57.919877 containerd[1551]: time="2025-05-13T12:57:57.919646029Z" level=warning msg="container event discarded" container=0f83b75909ecc3354953212c8cf075ec0186791158e31bcde62f65f39b29b691 type=CONTAINER_STARTED_EVENT May 13 12:57:57.944291 containerd[1551]: time="2025-05-13T12:57:57.944161332Z" level=warning msg="container event discarded" container=4a847720c0ed102da04ad6eb911b8c0733e5c2d61dcaa43f44f37e36112fd57d type=CONTAINER_CREATED_EVENT May 13 12:57:57.944291 containerd[1551]: time="2025-05-13T12:57:57.944227680Z" level=warning msg="container event discarded" container=4a847720c0ed102da04ad6eb911b8c0733e5c2d61dcaa43f44f37e36112fd57d type=CONTAINER_STARTED_EVENT May 13 12:57:57.944291 containerd[1551]: time="2025-05-13T12:57:57.944248008Z" level=warning msg="container event discarded" container=0532b6df191d44686438f964db7d3d3c9419ecb08d1d63efe028809c75886829 type=CONTAINER_CREATED_EVENT May 13 12:57:57.980075 containerd[1551]: time="2025-05-13T12:57:57.979910639Z" level=warning msg="container event discarded" container=98b5dc6f4ef9e42d0f8806eb6e52abd0ae250b6d091f8760aabc936f2e1b4ac8 type=CONTAINER_CREATED_EVENT May 13 12:57:57.980075 containerd[1551]: time="2025-05-13T12:57:57.979987667Z" level=warning msg="container event discarded" container=98b5dc6f4ef9e42d0f8806eb6e52abd0ae250b6d091f8760aabc936f2e1b4ac8 type=CONTAINER_STARTED_EVENT May 13 12:57:57.980075 containerd[1551]: time="2025-05-13T12:57:57.980012574Z" level=warning msg="container event discarded" container=33ace06d6ce68c383a67f94fbc7bede07b053fbaca123a515abae16317ae1cf1 type=CONTAINER_CREATED_EVENT May 13 12:57:58.023495 
containerd[1551]: time="2025-05-13T12:57:58.023298951Z" level=warning msg="container event discarded" container=3d71b0821609dc7c31efb18842411da5120f94d049d71f37da9e9206c55f1239 type=CONTAINER_CREATED_EVENT May 13 12:57:58.100037 containerd[1551]: time="2025-05-13T12:57:58.099862289Z" level=warning msg="container event discarded" container=33ace06d6ce68c383a67f94fbc7bede07b053fbaca123a515abae16317ae1cf1 type=CONTAINER_STARTED_EVENT May 13 12:57:58.113573 containerd[1551]: time="2025-05-13T12:57:58.113324120Z" level=warning msg="container event discarded" container=0532b6df191d44686438f964db7d3d3c9419ecb08d1d63efe028809c75886829 type=CONTAINER_STARTED_EVENT May 13 12:57:58.150419 containerd[1551]: time="2025-05-13T12:57:58.150204372Z" level=warning msg="container event discarded" container=3d71b0821609dc7c31efb18842411da5120f94d049d71f37da9e9206c55f1239 type=CONTAINER_STARTED_EVENT May 13 12:57:58.500792 systemd[1]: Started sshd@10-172.24.4.211:22-172.24.4.1:41120.service - OpenSSH per-connection server daemon (172.24.4.1:41120). May 13 12:57:58.941114 containerd[1551]: time="2025-05-13T12:57:58.941046339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"1590e0db2abb7ca77cd7a146d085249fcc7e560dd2102815fb1020e53f178856\" pid:5372 exited_at:{seconds:1747141078 nanos:940195678}" May 13 12:57:59.962917 sshd[5358]: Accepted publickey for core from 172.24.4.1 port 41120 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY May 13 12:57:59.967754 sshd-session[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:57:59.995437 systemd-logind[1524]: New session 13 of user core. May 13 12:58:00.001898 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 13 12:58:00.808216 sshd[5381]: Connection closed by 172.24.4.1 port 41120 May 13 12:58:00.805704 sshd-session[5358]: pam_unix(sshd:session): session closed for user core May 13 12:58:00.832776 systemd[1]: sshd@10-172.24.4.211:22-172.24.4.1:41120.service: Deactivated successfully. May 13 12:58:00.841201 systemd[1]: session-13.scope: Deactivated successfully. May 13 12:58:00.846311 systemd-logind[1524]: Session 13 logged out. Waiting for processes to exit. May 13 12:58:00.863089 systemd[1]: Started sshd@11-172.24.4.211:22-172.24.4.1:41122.service - OpenSSH per-connection server daemon (172.24.4.1:41122). May 13 12:58:00.870411 systemd-logind[1524]: Removed session 13. May 13 12:58:02.298952 sshd[5395]: Accepted publickey for core from 172.24.4.1 port 41122 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY May 13 12:58:02.302406 sshd-session[5395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:58:02.316254 systemd-logind[1524]: New session 14 of user core. May 13 12:58:02.323830 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 12:58:03.105605 sshd[5397]: Connection closed by 172.24.4.1 port 41122 May 13 12:58:03.106360 sshd-session[5395]: pam_unix(sshd:session): session closed for user core May 13 12:58:03.117212 systemd[1]: sshd@11-172.24.4.211:22-172.24.4.1:41122.service: Deactivated successfully. May 13 12:58:03.122162 systemd[1]: session-14.scope: Deactivated successfully. May 13 12:58:03.125048 systemd-logind[1524]: Session 14 logged out. Waiting for processes to exit. May 13 12:58:03.130138 systemd[1]: Started sshd@12-172.24.4.211:22-172.24.4.1:41134.service - OpenSSH per-connection server daemon (172.24.4.1:41134). May 13 12:58:03.133282 systemd-logind[1524]: Removed session 14. 
May 13 12:58:04.473172 sshd[5407]: Accepted publickey for core from 172.24.4.1 port 41134 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:58:04.477243 sshd-session[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:58:04.496379 systemd-logind[1524]: New session 15 of user core.
May 13 12:58:04.506007 systemd[1]: Started session-15.scope - Session 15 of User core.
May 13 12:58:05.350085 sshd[5411]: Connection closed by 172.24.4.1 port 41134
May 13 12:58:05.350984 sshd-session[5407]: pam_unix(sshd:session): session closed for user core
May 13 12:58:05.354529 systemd-logind[1524]: Session 15 logged out. Waiting for processes to exit.
May 13 12:58:05.355942 systemd[1]: sshd@12-172.24.4.211:22-172.24.4.1:41134.service: Deactivated successfully.
May 13 12:58:05.360839 systemd[1]: session-15.scope: Deactivated successfully.
May 13 12:58:05.366846 systemd-logind[1524]: Removed session 15.
May 13 12:58:09.652560 containerd[1551]: time="2025-05-13T12:58:09.652229880Z" level=warning msg="container event discarded" container=3cacb711590d28d2a0af61e073cd364a387e203c3447602e6e72a95dd44e522d type=CONTAINER_CREATED_EVENT
May 13 12:58:09.652560 containerd[1551]: time="2025-05-13T12:58:09.652376903Z" level=warning msg="container event discarded" container=3cacb711590d28d2a0af61e073cd364a387e203c3447602e6e72a95dd44e522d type=CONTAINER_STARTED_EVENT
May 13 12:58:09.913345 containerd[1551]: time="2025-05-13T12:58:09.912889797Z" level=warning msg="container event discarded" container=8d2877cbdefa928260925f20aa67f9df3580a81be0421a5312c6a3b3a89fd091 type=CONTAINER_CREATED_EVENT
May 13 12:58:09.913345 containerd[1551]: time="2025-05-13T12:58:09.913004417Z" level=warning msg="container event discarded" container=8d2877cbdefa928260925f20aa67f9df3580a81be0421a5312c6a3b3a89fd091 type=CONTAINER_STARTED_EVENT
May 13 12:58:09.943305 containerd[1551]: time="2025-05-13T12:58:09.943125748Z" level=warning msg="container event discarded" container=4f374db88f1df3a56805773caca0bdddd359de9abf0bb76f45d909b44ea662ce type=CONTAINER_CREATED_EVENT
May 13 12:58:10.012906 containerd[1551]: time="2025-05-13T12:58:10.012617840Z" level=warning msg="container event discarded" container=4f374db88f1df3a56805773caca0bdddd359de9abf0bb76f45d909b44ea662ce type=CONTAINER_STARTED_EVENT
May 13 12:58:10.377088 systemd[1]: Started sshd@13-172.24.4.211:22-172.24.4.1:41988.service - OpenSSH per-connection server daemon (172.24.4.1:41988).
May 13 12:58:12.001735 sshd[5429]: Accepted publickey for core from 172.24.4.1 port 41988 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:58:12.004419 sshd-session[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:58:12.019086 systemd-logind[1524]: New session 16 of user core.
May 13 12:58:12.027844 systemd[1]: Started session-16.scope - Session 16 of User core.
May 13 12:58:12.769096 containerd[1551]: time="2025-05-13T12:58:12.768656884Z" level=warning msg="container event discarded" container=04ae3f302b8a4016ac542254ccb408878d6a4b23a984e3d7adcf793419de7636 type=CONTAINER_CREATED_EVENT
May 13 12:58:12.814344 containerd[1551]: time="2025-05-13T12:58:12.814129119Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"7b8c3f99074d2fe7927a5cf519ba4d546a68e39f15078942056790fda72182e9\" pid:5452 exited_at:{seconds:1747141092 nanos:812977845}"
May 13 12:58:12.819561 sshd[5431]: Connection closed by 172.24.4.1 port 41988
May 13 12:58:12.821312 sshd-session[5429]: pam_unix(sshd:session): session closed for user core
May 13 12:58:12.824875 systemd-logind[1524]: Session 16 logged out. Waiting for processes to exit.
May 13 12:58:12.826572 systemd[1]: sshd@13-172.24.4.211:22-172.24.4.1:41988.service: Deactivated successfully.
May 13 12:58:12.829968 systemd[1]: session-16.scope: Deactivated successfully.
May 13 12:58:12.834710 systemd-logind[1524]: Removed session 16.
May 13 12:58:12.837530 containerd[1551]: time="2025-05-13T12:58:12.837385722Z" level=warning msg="container event discarded" container=04ae3f302b8a4016ac542254ccb408878d6a4b23a984e3d7adcf793419de7636 type=CONTAINER_STARTED_EVENT
May 13 12:58:16.546816 containerd[1551]: time="2025-05-13T12:58:16.546594564Z" level=warning msg="container event discarded" container=8e69dd8fb7c01b1bc8e6b7ecb011ae4339577073a4c3920748862d00183f24b8 type=CONTAINER_CREATED_EVENT
May 13 12:58:16.546816 containerd[1551]: time="2025-05-13T12:58:16.546722350Z" level=warning msg="container event discarded" container=8e69dd8fb7c01b1bc8e6b7ecb011ae4339577073a4c3920748862d00183f24b8 type=CONTAINER_STARTED_EVENT
May 13 12:58:16.578755 containerd[1551]: time="2025-05-13T12:58:16.578601960Z" level=warning msg="container event discarded" container=4e1f5c0d4cb759ec083355c6874fa5692e1c5da4362fe9b889ea0e1f91d33887 type=CONTAINER_CREATED_EVENT
May 13 12:58:16.579185 containerd[1551]: time="2025-05-13T12:58:16.579071113Z" level=warning msg="container event discarded" container=4e1f5c0d4cb759ec083355c6874fa5692e1c5da4362fe9b889ea0e1f91d33887 type=CONTAINER_STARTED_EVENT
May 13 12:58:17.850339 systemd[1]: Started sshd@14-172.24.4.211:22-172.24.4.1:60574.service - OpenSSH per-connection server daemon (172.24.4.1:60574).
May 13 12:58:19.336173 sshd[5468]: Accepted publickey for core from 172.24.4.1 port 60574 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:58:19.339001 sshd-session[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:58:19.347170 systemd-logind[1524]: New session 17 of user core.
May 13 12:58:19.353695 systemd[1]: Started session-17.scope - Session 17 of User core.
May 13 12:58:19.977543 sshd[5470]: Connection closed by 172.24.4.1 port 60574
May 13 12:58:19.976383 sshd-session[5468]: pam_unix(sshd:session): session closed for user core
May 13 12:58:19.985182 systemd-logind[1524]: Session 17 logged out. Waiting for processes to exit.
May 13 12:58:19.985425 systemd[1]: sshd@14-172.24.4.211:22-172.24.4.1:60574.service: Deactivated successfully.
May 13 12:58:19.988032 containerd[1551]: time="2025-05-13T12:58:19.987881194Z" level=warning msg="container event discarded" container=919c00044bca08cc5de60fc3d836c1d92f0b819de27040becf1bf0c656eaddb2 type=CONTAINER_CREATED_EVENT
May 13 12:58:19.996055 systemd[1]: session-17.scope: Deactivated successfully.
May 13 12:58:20.004609 systemd-logind[1524]: Removed session 17.
May 13 12:58:20.078546 containerd[1551]: time="2025-05-13T12:58:20.078273124Z" level=warning msg="container event discarded" container=919c00044bca08cc5de60fc3d836c1d92f0b819de27040becf1bf0c656eaddb2 type=CONTAINER_STARTED_EVENT
May 13 12:58:21.959963 containerd[1551]: time="2025-05-13T12:58:21.959712758Z" level=warning msg="container event discarded" container=26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39 type=CONTAINER_CREATED_EVENT
May 13 12:58:22.035338 containerd[1551]: time="2025-05-13T12:58:22.035162200Z" level=warning msg="container event discarded" container=26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39 type=CONTAINER_STARTED_EVENT
May 13 12:58:22.845128 containerd[1551]: time="2025-05-13T12:58:22.844932032Z" level=warning msg="container event discarded" container=26788cb866ca7377df7872a448f918eae037adf637de896d4dee1bfc38d1cb39 type=CONTAINER_STOPPED_EVENT
May 13 12:58:25.009814 systemd[1]: Started sshd@15-172.24.4.211:22-172.24.4.1:42606.service - OpenSSH per-connection server daemon (172.24.4.1:42606).
May 13 12:58:26.300033 sshd[5498]: Accepted publickey for core from 172.24.4.1 port 42606 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:58:26.304822 sshd-session[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:58:26.328989 systemd-logind[1524]: New session 18 of user core.
May 13 12:58:26.342072 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 12:58:27.052738 sshd[5500]: Connection closed by 172.24.4.1 port 42606
May 13 12:58:27.052337 sshd-session[5498]: pam_unix(sshd:session): session closed for user core
May 13 12:58:27.072294 systemd[1]: sshd@15-172.24.4.211:22-172.24.4.1:42606.service: Deactivated successfully.
May 13 12:58:27.077406 systemd[1]: session-18.scope: Deactivated successfully.
May 13 12:58:27.081343 systemd-logind[1524]: Session 18 logged out. Waiting for processes to exit.
May 13 12:58:27.094149 systemd[1]: Started sshd@16-172.24.4.211:22-172.24.4.1:42610.service - OpenSSH per-connection server daemon (172.24.4.1:42610).
May 13 12:58:27.095891 systemd-logind[1524]: Removed session 18.
May 13 12:58:28.626897 sshd[5512]: Accepted publickey for core from 172.24.4.1 port 42610 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:58:28.630182 sshd-session[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:58:28.647875 systemd-logind[1524]: New session 19 of user core.
May 13 12:58:28.672929 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 12:58:28.976173 containerd[1551]: time="2025-05-13T12:58:28.975955453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"2849f76ff35161b9271c459b6b8d09620c8f47142dd92c0d510a9385cb4d8454\" pid:5528 exited_at:{seconds:1747141108 nanos:974783072}"
May 13 12:58:29.791139 sshd[5514]: Connection closed by 172.24.4.1 port 42610
May 13 12:58:29.792585 sshd-session[5512]: pam_unix(sshd:session): session closed for user core
May 13 12:58:29.814688 systemd[1]: sshd@16-172.24.4.211:22-172.24.4.1:42610.service: Deactivated successfully.
May 13 12:58:29.822596 systemd[1]: session-19.scope: Deactivated successfully.
May 13 12:58:29.824024 containerd[1551]: time="2025-05-13T12:58:29.823784077Z" level=warning msg="container event discarded" container=8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e type=CONTAINER_CREATED_EVENT
May 13 12:58:29.827253 systemd-logind[1524]: Session 19 logged out. Waiting for processes to exit.
May 13 12:58:29.840084 systemd[1]: Started sshd@17-172.24.4.211:22-172.24.4.1:42614.service - OpenSSH per-connection server daemon (172.24.4.1:42614).
May 13 12:58:29.844731 systemd-logind[1524]: Removed session 19.
May 13 12:58:29.913643 containerd[1551]: time="2025-05-13T12:58:29.913540076Z" level=warning msg="container event discarded" container=8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e type=CONTAINER_STARTED_EVENT
May 13 12:58:31.030674 sshd[5546]: Accepted publickey for core from 172.24.4.1 port 42614 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:58:31.033405 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:58:31.048925 systemd-logind[1524]: New session 20 of user core.
May 13 12:58:31.058816 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 12:58:31.654771 containerd[1551]: time="2025-05-13T12:58:31.654711061Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"d9669daa7b99e68e40a06b6ef40d270cda355c345b9fa25a066a8df915f5a106\" pid:5568 exited_at:{seconds:1747141111 nanos:654353101}"
May 13 12:58:31.963720 containerd[1551]: time="2025-05-13T12:58:31.963384651Z" level=warning msg="container event discarded" container=8928bd27833684d44bd0b59e7c8e4f3a488aa1f9a38ddc7c0233e3e2f93d376e type=CONTAINER_STOPPED_EVENT
May 13 12:58:33.435535 sshd[5548]: Connection closed by 172.24.4.1 port 42614
May 13 12:58:33.437591 sshd-session[5546]: pam_unix(sshd:session): session closed for user core
May 13 12:58:33.456024 systemd[1]: sshd@17-172.24.4.211:22-172.24.4.1:42614.service: Deactivated successfully.
May 13 12:58:33.462534 systemd[1]: session-20.scope: Deactivated successfully.
May 13 12:58:33.468961 systemd-logind[1524]: Session 20 logged out. Waiting for processes to exit.
May 13 12:58:33.475838 systemd-logind[1524]: Removed session 20.
May 13 12:58:33.480076 systemd[1]: Started sshd@18-172.24.4.211:22-172.24.4.1:42630.service - OpenSSH per-connection server daemon (172.24.4.1:42630).
May 13 12:58:34.763011 sshd[5589]: Accepted publickey for core from 172.24.4.1 port 42630 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:58:34.765648 sshd-session[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:58:34.786746 systemd-logind[1524]: New session 21 of user core.
May 13 12:58:34.796845 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 12:58:35.814984 sshd[5591]: Connection closed by 172.24.4.1 port 42630
May 13 12:58:35.817017 sshd-session[5589]: pam_unix(sshd:session): session closed for user core
May 13 12:58:35.839231 systemd[1]: sshd@18-172.24.4.211:22-172.24.4.1:42630.service: Deactivated successfully.
May 13 12:58:35.844534 systemd[1]: session-21.scope: Deactivated successfully.
May 13 12:58:35.849396 systemd-logind[1524]: Session 21 logged out. Waiting for processes to exit.
May 13 12:58:35.857731 systemd[1]: Started sshd@19-172.24.4.211:22-172.24.4.1:53768.service - OpenSSH per-connection server daemon (172.24.4.1:53768).
May 13 12:58:35.861029 systemd-logind[1524]: Removed session 21.
May 13 12:58:37.006370 sshd[5601]: Accepted publickey for core from 172.24.4.1 port 53768 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:58:37.011814 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:58:37.025611 systemd-logind[1524]: New session 22 of user core.
May 13 12:58:37.029670 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 12:58:37.840605 sshd[5603]: Connection closed by 172.24.4.1 port 53768
May 13 12:58:37.841942 sshd-session[5601]: pam_unix(sshd:session): session closed for user core
May 13 12:58:37.857620 systemd[1]: sshd@19-172.24.4.211:22-172.24.4.1:53768.service: Deactivated successfully.
May 13 12:58:37.865988 systemd[1]: session-22.scope: Deactivated successfully.
May 13 12:58:37.870314 systemd-logind[1524]: Session 22 logged out. Waiting for processes to exit.
May 13 12:58:37.874602 systemd-logind[1524]: Removed session 22.
May 13 12:58:41.507790 containerd[1551]: time="2025-05-13T12:58:41.506935662Z" level=warning msg="container event discarded" container=e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c type=CONTAINER_CREATED_EVENT
May 13 12:58:41.591944 containerd[1551]: time="2025-05-13T12:58:41.591856375Z" level=warning msg="container event discarded" container=e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c type=CONTAINER_STARTED_EVENT
May 13 12:58:42.840059 containerd[1551]: time="2025-05-13T12:58:42.840016183Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"15752121721b3eecc5b334eb152ed5d07bf8e027e6066e27dc9ad0a986fe710c\" pid:5632 exited_at:{seconds:1747141122 nanos:839621261}"
May 13 12:58:42.860970 systemd[1]: Started sshd@20-172.24.4.211:22-172.24.4.1:53776.service - OpenSSH per-connection server daemon (172.24.4.1:53776).
May 13 12:58:43.999616 update_engine[1526]: I20250513 12:58:43.999366 1526 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 13 12:58:43.999616 update_engine[1526]: I20250513 12:58:43.999577 1526 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 13 12:58:44.007528 update_engine[1526]: I20250513 12:58:44.000543 1526 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 13 12:58:44.007528 update_engine[1526]: I20250513 12:58:44.006858 1526 omaha_request_params.cc:62] Current group set to developer
May 13 12:58:44.008979 update_engine[1526]: I20250513 12:58:44.008757 1526 update_attempter.cc:499] Already updated boot flags. Skipping.
May 13 12:58:44.008979 update_engine[1526]: I20250513 12:58:44.008782 1526 update_attempter.cc:643] Scheduling an action processor start.
May 13 12:58:44.008979 update_engine[1526]: I20250513 12:58:44.008826 1526 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 13 12:58:44.009280 update_engine[1526]: I20250513 12:58:44.009007 1526 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 13 12:58:44.009280 update_engine[1526]: I20250513 12:58:44.009231 1526 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 13 12:58:44.009280 update_engine[1526]: I20250513 12:58:44.009245 1526 omaha_request_action.cc:272] Request:
May 13 12:58:44.009280 update_engine[1526]:
May 13 12:58:44.009280 update_engine[1526]:
May 13 12:58:44.009280 update_engine[1526]:
May 13 12:58:44.009280 update_engine[1526]:
May 13 12:58:44.009280 update_engine[1526]:
May 13 12:58:44.009280 update_engine[1526]:
May 13 12:58:44.009280 update_engine[1526]:
May 13 12:58:44.009280 update_engine[1526]:
May 13 12:58:44.009280 update_engine[1526]: I20250513 12:58:44.009275 1526 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 13 12:58:44.018612 update_engine[1526]: I20250513 12:58:44.017274 1526 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 13 12:58:44.018612 update_engine[1526]: I20250513 12:58:44.018301 1526 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 13 12:58:44.026547 update_engine[1526]: E20250513 12:58:44.025794 1526 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 13 12:58:44.026547 update_engine[1526]: I20250513 12:58:44.025904 1526 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 13 12:58:44.035965 locksmithd[1572]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 13 12:58:44.042406 sshd[5644]: Accepted publickey for core from 172.24.4.1 port 53776 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:58:44.044807 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:58:44.052903 systemd-logind[1524]: New session 23 of user core.
May 13 12:58:44.061734 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 12:58:44.757650 sshd[5646]: Connection closed by 172.24.4.1 port 53776
May 13 12:58:44.759923 sshd-session[5644]: pam_unix(sshd:session): session closed for user core
May 13 12:58:44.771044 systemd[1]: sshd@20-172.24.4.211:22-172.24.4.1:53776.service: Deactivated successfully.
May 13 12:58:44.777627 systemd[1]: session-23.scope: Deactivated successfully.
May 13 12:58:44.782365 systemd-logind[1524]: Session 23 logged out. Waiting for processes to exit.
May 13 12:58:44.787051 systemd-logind[1524]: Removed session 23.
May 13 12:58:45.701022 containerd[1551]: time="2025-05-13T12:58:45.700653629Z" level=warning msg="container event discarded" container=9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e type=CONTAINER_CREATED_EVENT
May 13 12:58:45.701022 containerd[1551]: time="2025-05-13T12:58:45.700976092Z" level=warning msg="container event discarded" container=9fb3eef6f478d68c4cacbc2a1b7b8f7dba3e096080b691782d31037237282a6e type=CONTAINER_STARTED_EVENT
May 13 12:58:45.723533 containerd[1551]: time="2025-05-13T12:58:45.723305109Z" level=warning msg="container event discarded" container=2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e type=CONTAINER_CREATED_EVENT
May 13 12:58:45.723533 containerd[1551]: time="2025-05-13T12:58:45.723468396Z" level=warning msg="container event discarded" container=2fcb79ab1c114d57d2c498cb3b4b82977ef192cf9a73f8937030ed32ec526d7e type=CONTAINER_STARTED_EVENT
May 13 12:58:45.792201 containerd[1551]: time="2025-05-13T12:58:45.791911433Z" level=warning msg="container event discarded" container=9f214cb021f5d83cf7fb6b12c9bb578704190f9eed7f48cc461b42180d0c7fca type=CONTAINER_CREATED_EVENT
May 13 12:58:45.922664 containerd[1551]: time="2025-05-13T12:58:45.922386219Z" level=warning msg="container event discarded" container=9f214cb021f5d83cf7fb6b12c9bb578704190f9eed7f48cc461b42180d0c7fca type=CONTAINER_STARTED_EVENT
May 13 12:58:46.089363 containerd[1551]: time="2025-05-13T12:58:46.089132433Z" level=warning msg="container event discarded" container=106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49 type=CONTAINER_CREATED_EVENT
May 13 12:58:46.089732 containerd[1551]: time="2025-05-13T12:58:46.089374030Z" level=warning msg="container event discarded" container=106af271c0a88ce67e568492518f335790d8183381ff5c9798813c696e87bb49 type=CONTAINER_STARTED_EVENT
May 13 12:58:46.107838 containerd[1551]: time="2025-05-13T12:58:46.107744893Z" level=warning msg="container event discarded" container=ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7 type=CONTAINER_CREATED_EVENT
May 13 12:58:46.107838 containerd[1551]: time="2025-05-13T12:58:46.107820590Z" level=warning msg="container event discarded" container=ba3a21a7ba60362e2b17dd5de4ee7acb8639b2e8b287c0e41436656552d929f7 type=CONTAINER_STARTED_EVENT
May 13 12:58:46.881373 containerd[1551]: time="2025-05-13T12:58:46.881186698Z" level=warning msg="container event discarded" container=1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0 type=CONTAINER_CREATED_EVENT
May 13 12:58:46.881373 containerd[1551]: time="2025-05-13T12:58:46.881368389Z" level=warning msg="container event discarded" container=1e611f694a7ae86b1f734b54222dd9cfbcd612df07f98756b712e8e0eba40eb0 type=CONTAINER_STARTED_EVENT
May 13 12:58:47.921672 containerd[1551]: time="2025-05-13T12:58:47.921322785Z" level=warning msg="container event discarded" container=374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211 type=CONTAINER_CREATED_EVENT
May 13 12:58:47.922771 containerd[1551]: time="2025-05-13T12:58:47.922649549Z" level=warning msg="container event discarded" container=374d815f552f046802ac648de71b8446a46e4f14f168057354bc207ec62e6211 type=CONTAINER_STARTED_EVENT
May 13 12:58:47.990237 containerd[1551]: time="2025-05-13T12:58:47.990130376Z" level=warning msg="container event discarded" container=dc45baf005f4d821de97ed723314e7d99c8a147ba653ab16f83a9de06645769c type=CONTAINER_CREATED_EVENT
May 13 12:58:48.069372 containerd[1551]: time="2025-05-13T12:58:48.069256545Z" level=warning msg="container event discarded" container=dc45baf005f4d821de97ed723314e7d99c8a147ba653ab16f83a9de06645769c type=CONTAINER_STARTED_EVENT
May 13 12:58:48.268060 containerd[1551]: time="2025-05-13T12:58:48.267724404Z" level=warning msg="container event discarded" container=fc07ad63684b96333ae5f047a79864751140bd2bad66838cc9ae9de1356b93e5 type=CONTAINER_CREATED_EVENT
May 13 12:58:48.337735 containerd[1551]: time="2025-05-13T12:58:48.337658147Z" level=warning msg="container event discarded" container=fc07ad63684b96333ae5f047a79864751140bd2bad66838cc9ae9de1356b93e5 type=CONTAINER_STARTED_EVENT
May 13 12:58:49.795230 systemd[1]: Started sshd@21-172.24.4.211:22-172.24.4.1:39698.service - OpenSSH per-connection server daemon (172.24.4.1:39698).
May 13 12:58:50.934343 sshd[5658]: Accepted publickey for core from 172.24.4.1 port 39698 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:58:50.938113 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:58:50.983242 systemd-logind[1524]: New session 24 of user core.
May 13 12:58:50.999400 systemd[1]: Started session-24.scope - Session 24 of User core.
May 13 12:58:51.909528 sshd[5660]: Connection closed by 172.24.4.1 port 39698
May 13 12:58:51.910900 sshd-session[5658]: pam_unix(sshd:session): session closed for user core
May 13 12:58:51.920060 systemd[1]: sshd@21-172.24.4.211:22-172.24.4.1:39698.service: Deactivated successfully.
May 13 12:58:51.927641 systemd[1]: session-24.scope: Deactivated successfully.
May 13 12:58:51.930455 systemd-logind[1524]: Session 24 logged out. Waiting for processes to exit.
May 13 12:58:51.934758 systemd-logind[1524]: Removed session 24.
May 13 12:58:53.338243 containerd[1551]: time="2025-05-13T12:58:53.337917580Z" level=warning msg="container event discarded" container=6db20dadfa3f75974145a7432eb3dca184c357f2f992e7f2a974a8dc4dfc4fac type=CONTAINER_CREATED_EVENT
May 13 12:58:53.422756 containerd[1551]: time="2025-05-13T12:58:53.422639011Z" level=warning msg="container event discarded" container=6db20dadfa3f75974145a7432eb3dca184c357f2f992e7f2a974a8dc4dfc4fac type=CONTAINER_STARTED_EVENT
May 13 12:58:53.906863 containerd[1551]: time="2025-05-13T12:58:53.906713119Z" level=warning msg="container event discarded" container=ed606ff06215d70015d4c07852602badb8f11a6ec5be76ec817f4807a8c1eb99 type=CONTAINER_CREATED_EVENT
May 13 12:58:53.996022 update_engine[1526]: I20250513 12:58:53.995762 1526 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 13 12:58:53.997027 update_engine[1526]: I20250513 12:58:53.996886 1526 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 13 12:58:53.997438 containerd[1551]: time="2025-05-13T12:58:53.997181915Z" level=warning msg="container event discarded" container=ed606ff06215d70015d4c07852602badb8f11a6ec5be76ec817f4807a8c1eb99 type=CONTAINER_STARTED_EVENT
May 13 12:58:53.998900 update_engine[1526]: I20250513 12:58:53.997714 1526 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 13 12:58:54.003203 update_engine[1526]: E20250513 12:58:54.002840 1526 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 13 12:58:54.003203 update_engine[1526]: I20250513 12:58:54.003063 1526 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 13 12:58:56.938057 systemd[1]: Started sshd@22-172.24.4.211:22-172.24.4.1:50782.service - OpenSSH per-connection server daemon (172.24.4.1:50782).
May 13 12:58:58.055742 sshd[5672]: Accepted publickey for core from 172.24.4.1 port 50782 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:58:58.060394 sshd-session[5672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:58:58.075982 systemd-logind[1524]: New session 25 of user core.
May 13 12:58:58.082966 systemd[1]: Started session-25.scope - Session 25 of User core.
May 13 12:58:58.527299 containerd[1551]: time="2025-05-13T12:58:58.527045571Z" level=warning msg="container event discarded" container=1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d type=CONTAINER_CREATED_EVENT
May 13 12:58:58.618537 containerd[1551]: time="2025-05-13T12:58:58.618452295Z" level=warning msg="container event discarded" container=1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d type=CONTAINER_STARTED_EVENT
May 13 12:58:58.776056 sshd[5674]: Connection closed by 172.24.4.1 port 50782
May 13 12:58:58.777469 sshd-session[5672]: pam_unix(sshd:session): session closed for user core
May 13 12:58:58.787054 systemd-logind[1524]: Session 25 logged out. Waiting for processes to exit.
May 13 12:58:58.787397 systemd[1]: sshd@22-172.24.4.211:22-172.24.4.1:50782.service: Deactivated successfully.
May 13 12:58:58.793618 systemd[1]: session-25.scope: Deactivated successfully.
May 13 12:58:58.801606 systemd-logind[1524]: Removed session 25.
May 13 12:58:58.935972 containerd[1551]: time="2025-05-13T12:58:58.935917461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"5b5ba35bfb3727cd1162d46284089781a0710cb407c5edd81beb2713a4c18a03\" pid:5698 exited_at:{seconds:1747141138 nanos:935250280}"
May 13 12:59:01.164576 containerd[1551]: time="2025-05-13T12:59:01.164315796Z" level=warning msg="container event discarded" container=b308af4d0d8e867e692928a696b01b1859a14586203c1b10ee3b7b00955c989a type=CONTAINER_CREATED_EVENT
May 13 12:59:01.286473 containerd[1551]: time="2025-05-13T12:59:01.286023161Z" level=warning msg="container event discarded" container=b308af4d0d8e867e692928a696b01b1859a14586203c1b10ee3b7b00955c989a type=CONTAINER_STARTED_EVENT
May 13 12:59:03.822622 systemd[1]: Started sshd@23-172.24.4.211:22-172.24.4.1:45892.service - OpenSSH per-connection server daemon (172.24.4.1:45892).
May 13 12:59:03.999226 update_engine[1526]: I20250513 12:59:03.998346 1526 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 13 12:59:04.000131 update_engine[1526]: I20250513 12:59:04.000083 1526 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 13 12:59:04.002360 update_engine[1526]: I20250513 12:59:04.002209 1526 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 13 12:59:04.008090 update_engine[1526]: E20250513 12:59:04.007812 1526 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 13 12:59:04.008090 update_engine[1526]: I20250513 12:59:04.007987 1526 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 13 12:59:04.964154 sshd[5708]: Accepted publickey for core from 172.24.4.1 port 45892 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:59:04.968134 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:59:04.986568 systemd-logind[1524]: New session 26 of user core.
May 13 12:59:04.999869 systemd[1]: Started session-26.scope - Session 26 of User core.
May 13 12:59:05.870249 sshd[5718]: Connection closed by 172.24.4.1 port 45892
May 13 12:59:05.872803 sshd-session[5708]: pam_unix(sshd:session): session closed for user core
May 13 12:59:05.893398 systemd[1]: sshd@23-172.24.4.211:22-172.24.4.1:45892.service: Deactivated successfully.
May 13 12:59:05.901842 systemd[1]: session-26.scope: Deactivated successfully.
May 13 12:59:05.904742 systemd-logind[1524]: Session 26 logged out. Waiting for processes to exit.
May 13 12:59:05.909327 systemd-logind[1524]: Removed session 26.
May 13 12:59:10.894250 systemd[1]: Started sshd@24-172.24.4.211:22-172.24.4.1:45902.service - OpenSSH per-connection server daemon (172.24.4.1:45902).
May 13 12:59:12.016655 sshd[5735]: Accepted publickey for core from 172.24.4.1 port 45902 ssh2: RSA SHA256:sq89ZFhMmPxpdS8EaM1KLau1WYnnN1aONVvOexr+LvY
May 13 12:59:12.018863 sshd-session[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:59:12.039618 systemd-logind[1524]: New session 27 of user core.
May 13 12:59:12.048858 systemd[1]: Started session-27.scope - Session 27 of User core.
May 13 12:59:12.758582 sshd[5737]: Connection closed by 172.24.4.1 port 45902
May 13 12:59:12.760728 sshd-session[5735]: pam_unix(sshd:session): session closed for user core
May 13 12:59:12.775627 systemd[1]: sshd@24-172.24.4.211:22-172.24.4.1:45902.service: Deactivated successfully.
May 13 12:59:12.783858 systemd[1]: session-27.scope: Deactivated successfully.
May 13 12:59:12.790985 systemd-logind[1524]: Session 27 logged out. Waiting for processes to exit.
May 13 12:59:12.796338 systemd-logind[1524]: Removed session 27.
May 13 12:59:12.868291 containerd[1551]: time="2025-05-13T12:59:12.868124619Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"914ec22aef47143f335d3ca4e2730407258cc4e196117c97f931afe256d1f0bf\" pid:5757 exited_at:{seconds:1747141152 nanos:866947158}"
May 13 12:59:13.993327 update_engine[1526]: I20250513 12:59:13.992992 1526 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 13 12:59:13.995548 update_engine[1526]: I20250513 12:59:13.994803 1526 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 13 12:59:13.996260 update_engine[1526]: I20250513 12:59:13.996205 1526 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 13 12:59:14.001368 update_engine[1526]: E20250513 12:59:14.001080 1526 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 13 12:59:14.001368 update_engine[1526]: I20250513 12:59:14.001199 1526 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 13 12:59:14.001368 update_engine[1526]: I20250513 12:59:14.001237 1526 omaha_request_action.cc:617] Omaha request response:
May 13 12:59:14.002166 update_engine[1526]: E20250513 12:59:14.002074 1526 omaha_request_action.cc:636] Omaha request network transfer failed.
May 13 12:59:14.002917 update_engine[1526]: I20250513 12:59:14.002838  1526 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 13 12:59:14.002917 update_engine[1526]: I20250513 12:59:14.002879  1526 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 13 12:59:14.002917 update_engine[1526]: I20250513 12:59:14.002904  1526 update_attempter.cc:306] Processing Done.
May 13 12:59:14.003236 update_engine[1526]: E20250513 12:59:14.003161  1526 update_attempter.cc:619] Update failed.
May 13 12:59:14.003449 update_engine[1526]: I20250513 12:59:14.003381  1526 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 13 12:59:14.003449 update_engine[1526]: I20250513 12:59:14.003417  1526 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 13 12:59:14.003813 update_engine[1526]: I20250513 12:59:14.003432  1526 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 13 12:59:14.004554 update_engine[1526]: I20250513 12:59:14.003995  1526 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 13 12:59:14.004554 update_engine[1526]: I20250513 12:59:14.004213  1526 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 13 12:59:14.004554 update_engine[1526]: I20250513 12:59:14.004240  1526 omaha_request_action.cc:272] Request:
May 13 12:59:14.004554 update_engine[1526]:
May 13 12:59:14.004554 update_engine[1526]:
May 13 12:59:14.004554 update_engine[1526]:
May 13 12:59:14.004554 update_engine[1526]:
May 13 12:59:14.004554 update_engine[1526]:
May 13 12:59:14.004554 update_engine[1526]:
May 13 12:59:14.004554 update_engine[1526]: I20250513 12:59:14.004253  1526 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 13 12:59:14.006057 update_engine[1526]: I20250513 12:59:14.004688  1526 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 13 12:59:14.006057 update_engine[1526]: I20250513 12:59:14.005187  1526 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 13 12:59:14.009237 locksmithd[1572]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 13 12:59:14.010744 update_engine[1526]: E20250513 12:59:14.010666  1526 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 13 12:59:14.010917 update_engine[1526]: I20250513 12:59:14.010768  1526 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 13 12:59:14.010917 update_engine[1526]: I20250513 12:59:14.010788  1526 omaha_request_action.cc:617] Omaha request response:
May 13 12:59:14.010917 update_engine[1526]: I20250513 12:59:14.010802  1526 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 13 12:59:14.010917 update_engine[1526]: I20250513 12:59:14.010814  1526 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 13 12:59:14.010917 update_engine[1526]: I20250513 12:59:14.010825  1526 update_attempter.cc:306] Processing Done.
May 13 12:59:14.010917 update_engine[1526]: I20250513 12:59:14.010838  1526 update_attempter.cc:310] Error event sent.
May 13 12:59:14.010917 update_engine[1526]: I20250513 12:59:14.010880  1526 update_check_scheduler.cc:74] Next update check in 42m57s
May 13 12:59:14.012963 locksmithd[1572]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 13 12:59:28.961858 containerd[1551]: time="2025-05-13T12:59:28.961727583Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"9bf9eaa987313ccd38008e7e90b4ab713c96732bd63492424b39d750c675ae2d\" pid:5782 exited_at:{seconds:1747141168 nanos:960659131}"
May 13 12:59:31.673065 containerd[1551]: time="2025-05-13T12:59:31.672760341Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"101047605c7fbca32f08950c54b37e7e75898fcedcfec8cf6caa19596a75049c\" pid:5804 exited_at:{seconds:1747141171 nanos:671451232}"
May 13 12:59:42.866441 containerd[1551]: time="2025-05-13T12:59:42.866093377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"a9fc52df00f7b08a50d1a9e958a08ae4075a3b2db2e41e7b34c98543b5c87d81\" pid:5827 exited_at:{seconds:1747141182 nanos:864596392}"
May 13 12:59:58.929334 containerd[1551]: time="2025-05-13T12:59:58.929234740Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c250149e4abcc57d5bab6d0d4700840075bf7b627389b4e27737d242506fc6d\" id:\"72ab742453895167de7ade77a96ac78cfc9ef6b4ff3fba74fc48d9586dbe3808\" pid:5868 exited_at:{seconds:1747141198 nanos:928646188}"
May 13 13:00:12.905241 containerd[1551]: time="2025-05-13T13:00:12.903879940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e093301312d53b9a3dbfcbaeeff69ceb5eb24eca2895303ad8f4d6e304a0267c\" id:\"eb176c62cdb6e1e01a901af4fd63cad85e308bf5b5a89e44ae4f800766880b73\" pid:5895 exited_at:{seconds:1747141212 nanos:895949149}"