May 9 01:12:36.090012 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:15:16 -00 2025
May 9 01:12:36.090042 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6dbb211661f4d09f7718fdc7eab00f1550a8baafb68f4d2efdaedafa102351ae
May 9 01:12:36.090052 kernel: BIOS-provided physical RAM map:
May 9 01:12:36.090061 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 9 01:12:36.090068 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 9 01:12:36.090078 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 9 01:12:36.090087 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
May 9 01:12:36.090095 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
May 9 01:12:36.090103 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 9 01:12:36.090110 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 9 01:12:36.090118 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
May 9 01:12:36.090126 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 9 01:12:36.090133 kernel: NX (Execute Disable) protection: active
May 9 01:12:36.090153 kernel: APIC: Static calls initialized
May 9 01:12:36.090165 kernel: SMBIOS 3.0.0 present.
May 9 01:12:36.090173 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
May 9 01:12:36.090182 kernel: Hypervisor detected: KVM
May 9 01:12:36.090190 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 9 01:12:36.090198 kernel: kvm-clock: using sched offset of 3647360690 cycles
May 9 01:12:36.090209 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 9 01:12:36.090217 kernel: tsc: Detected 1996.249 MHz processor
May 9 01:12:36.090226 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 9 01:12:36.090234 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 9 01:12:36.090243 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
May 9 01:12:36.090251 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 9 01:12:36.090260 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 9 01:12:36.090268 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
May 9 01:12:36.090276 kernel: ACPI: Early table checksum verification disabled
May 9 01:12:36.090287 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
May 9 01:12:36.090295 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 01:12:36.090303 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 01:12:36.090311 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 01:12:36.090320 kernel: ACPI: FACS 0x00000000BFFE0000 000040
May 9 01:12:36.090328 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 01:12:36.090336 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 01:12:36.090344 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
May 9 01:12:36.090353 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 9 01:12:36.090363 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
May 9 01:12:36.090371 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
May 9 01:12:36.090379 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
May 9 01:12:36.090390 kernel: No NUMA configuration found
May 9 01:12:36.090399 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
May 9 01:12:36.090407 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
May 9 01:12:36.090416 kernel: Zone ranges:
May 9 01:12:36.090426 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 9 01:12:36.090435 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 9 01:12:36.090444 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
May 9 01:12:36.090452 kernel: Movable zone start for each node
May 9 01:12:36.090461 kernel: Early memory node ranges
May 9 01:12:36.095528 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 9 01:12:36.095538 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
May 9 01:12:36.095547 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
May 9 01:12:36.095560 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
May 9 01:12:36.095569 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 9 01:12:36.095578 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 9 01:12:36.095587 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 9 01:12:36.095596 kernel: ACPI: PM-Timer IO Port: 0x608
May 9 01:12:36.095605 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 9 01:12:36.095614 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 9 01:12:36.095622 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 9 01:12:36.095631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 9 01:12:36.095642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 9 01:12:36.095651 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 9 01:12:36.095660 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 9 01:12:36.095668 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 9 01:12:36.095677 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 9 01:12:36.095686 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 9 01:12:36.095694 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
May 9 01:12:36.095703 kernel: Booting paravirtualized kernel on KVM
May 9 01:12:36.095712 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 9 01:12:36.095723 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 9 01:12:36.095732 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 9 01:12:36.095740 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 9 01:12:36.095749 kernel: pcpu-alloc: [0] 0 1
May 9 01:12:36.095757 kernel: kvm-guest: PV spinlocks disabled, no host support
May 9 01:12:36.095768 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6dbb211661f4d09f7718fdc7eab00f1550a8baafb68f4d2efdaedafa102351ae
May 9 01:12:36.095777 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 01:12:36.095788 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 01:12:36.095797 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 01:12:36.095805 kernel: Fallback order for Node 0: 0
May 9 01:12:36.095814 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 9 01:12:36.095822 kernel: Policy zone: Normal
May 9 01:12:36.095831 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 01:12:36.095840 kernel: software IO TLB: area num 2.
May 9 01:12:36.095849 kernel: Memory: 3962108K/4193772K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 231404K reserved, 0K cma-reserved)
May 9 01:12:36.095858 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 9 01:12:36.095868 kernel: ftrace: allocating 37993 entries in 149 pages
May 9 01:12:36.095877 kernel: ftrace: allocated 149 pages with 4 groups
May 9 01:12:36.095885 kernel: Dynamic Preempt: voluntary
May 9 01:12:36.095894 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 01:12:36.095904 kernel: rcu: RCU event tracing is enabled.
May 9 01:12:36.095913 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 9 01:12:36.095921 kernel: Trampoline variant of Tasks RCU enabled.
May 9 01:12:36.095930 kernel: Rude variant of Tasks RCU enabled.
May 9 01:12:36.095939 kernel: Tracing variant of Tasks RCU enabled.
May 9 01:12:36.095950 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 01:12:36.095965 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 9 01:12:36.095978 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 9 01:12:36.095990 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 01:12:36.095999 kernel: Console: colour VGA+ 80x25
May 9 01:12:36.096008 kernel: printk: console [tty0] enabled
May 9 01:12:36.096017 kernel: printk: console [ttyS0] enabled
May 9 01:12:36.096025 kernel: ACPI: Core revision 20230628
May 9 01:12:36.096034 kernel: APIC: Switch to symmetric I/O mode setup
May 9 01:12:36.096043 kernel: x2apic enabled
May 9 01:12:36.096054 kernel: APIC: Switched APIC routing to: physical x2apic
May 9 01:12:36.096063 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 9 01:12:36.096071 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 9 01:12:36.096080 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
May 9 01:12:36.096089 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 9 01:12:36.096098 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 9 01:12:36.096106 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 9 01:12:36.096115 kernel: Spectre V2 : Mitigation: Retpolines
May 9 01:12:36.096124 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 9 01:12:36.096135 kernel: Speculative Store Bypass: Vulnerable
May 9 01:12:36.096143 kernel: x86/fpu: x87 FPU will use FXSAVE
May 9 01:12:36.096152 kernel: Freeing SMP alternatives memory: 32K
May 9 01:12:36.096161 kernel: pid_max: default: 32768 minimum: 301
May 9 01:12:36.096175 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 01:12:36.096186 kernel: landlock: Up and running.
May 9 01:12:36.096195 kernel: SELinux: Initializing.
May 9 01:12:36.096203 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 01:12:36.096213 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 01:12:36.096222 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
May 9 01:12:36.096231 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 01:12:36.096241 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 01:12:36.096252 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 01:12:36.096261 kernel: Performance Events: AMD PMU driver.
May 9 01:12:36.096270 kernel: ... version: 0
May 9 01:12:36.096279 kernel: ... bit width: 48
May 9 01:12:36.096290 kernel: ... generic registers: 4
May 9 01:12:36.096299 kernel: ... value mask: 0000ffffffffffff
May 9 01:12:36.096308 kernel: ... max period: 00007fffffffffff
May 9 01:12:36.096317 kernel: ... fixed-purpose events: 0
May 9 01:12:36.096326 kernel: ... event mask: 000000000000000f
May 9 01:12:36.096335 kernel: signal: max sigframe size: 1440
May 9 01:12:36.096344 kernel: rcu: Hierarchical SRCU implementation.
May 9 01:12:36.096354 kernel: rcu: Max phase no-delay instances is 400.
May 9 01:12:36.096363 kernel: smp: Bringing up secondary CPUs ...
May 9 01:12:36.096372 kernel: smpboot: x86: Booting SMP configuration:
May 9 01:12:36.096383 kernel: .... node #0, CPUs: #1
May 9 01:12:36.096392 kernel: smp: Brought up 1 node, 2 CPUs
May 9 01:12:36.096401 kernel: smpboot: Max logical packages: 2
May 9 01:12:36.096410 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
May 9 01:12:36.096419 kernel: devtmpfs: initialized
May 9 01:12:36.096428 kernel: x86/mm: Memory block size: 128MB
May 9 01:12:36.096437 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 01:12:36.096446 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 9 01:12:36.096455 kernel: pinctrl core: initialized pinctrl subsystem
May 9 01:12:36.096503 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 01:12:36.096513 kernel: audit: initializing netlink subsys (disabled)
May 9 01:12:36.096523 kernel: audit: type=2000 audit(1746753155.524:1): state=initialized audit_enabled=0 res=1
May 9 01:12:36.096531 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 01:12:36.096541 kernel: thermal_sys: Registered thermal governor 'user_space'
May 9 01:12:36.096550 kernel: cpuidle: using governor menu
May 9 01:12:36.096559 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 01:12:36.096568 kernel: dca service started, version 1.12.1
May 9 01:12:36.096577 kernel: PCI: Using configuration type 1 for base access
May 9 01:12:36.096589 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 9 01:12:36.096598 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 01:12:36.096608 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 9 01:12:36.096617 kernel: ACPI: Added _OSI(Module Device)
May 9 01:12:36.096625 kernel: ACPI: Added _OSI(Processor Device)
May 9 01:12:36.096637 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 01:12:36.096646 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 01:12:36.096656 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 01:12:36.096666 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 9 01:12:36.096677 kernel: ACPI: Interpreter enabled
May 9 01:12:36.096688 kernel: ACPI: PM: (supports S0 S3 S5)
May 9 01:12:36.096698 kernel: ACPI: Using IOAPIC for interrupt routing
May 9 01:12:36.096708 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 9 01:12:36.096718 kernel: PCI: Using E820 reservations for host bridge windows
May 9 01:12:36.096727 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 9 01:12:36.096737 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 01:12:36.096905 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 9 01:12:36.097017 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 9 01:12:36.097115 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 9 01:12:36.097129 kernel: acpiphp: Slot [3] registered
May 9 01:12:36.097138 kernel: acpiphp: Slot [4] registered
May 9 01:12:36.097148 kernel: acpiphp: Slot [5] registered
May 9 01:12:36.097157 kernel: acpiphp: Slot [6] registered
May 9 01:12:36.097166 kernel: acpiphp: Slot [7] registered
May 9 01:12:36.097174 kernel: acpiphp: Slot [8] registered
May 9 01:12:36.097187 kernel: acpiphp: Slot [9] registered
May 9 01:12:36.097196 kernel: acpiphp: Slot [10] registered
May 9 01:12:36.097205 kernel: acpiphp: Slot [11] registered
May 9 01:12:36.097214 kernel: acpiphp: Slot [12] registered
May 9 01:12:36.097223 kernel: acpiphp: Slot [13] registered
May 9 01:12:36.097231 kernel: acpiphp: Slot [14] registered
May 9 01:12:36.097240 kernel: acpiphp: Slot [15] registered
May 9 01:12:36.097249 kernel: acpiphp: Slot [16] registered
May 9 01:12:36.097258 kernel: acpiphp: Slot [17] registered
May 9 01:12:36.097267 kernel: acpiphp: Slot [18] registered
May 9 01:12:36.097278 kernel: acpiphp: Slot [19] registered
May 9 01:12:36.097286 kernel: acpiphp: Slot [20] registered
May 9 01:12:36.097295 kernel: acpiphp: Slot [21] registered
May 9 01:12:36.097304 kernel: acpiphp: Slot [22] registered
May 9 01:12:36.097313 kernel: acpiphp: Slot [23] registered
May 9 01:12:36.097322 kernel: acpiphp: Slot [24] registered
May 9 01:12:36.097331 kernel: acpiphp: Slot [25] registered
May 9 01:12:36.097340 kernel: acpiphp: Slot [26] registered
May 9 01:12:36.097349 kernel: acpiphp: Slot [27] registered
May 9 01:12:36.097360 kernel: acpiphp: Slot [28] registered
May 9 01:12:36.097369 kernel: acpiphp: Slot [29] registered
May 9 01:12:36.097378 kernel: acpiphp: Slot [30] registered
May 9 01:12:36.097387 kernel: acpiphp: Slot [31] registered
May 9 01:12:36.097396 kernel: PCI host bridge to bus 0000:00
May 9 01:12:36.099553 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 9 01:12:36.099657 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 9 01:12:36.099751 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 9 01:12:36.099849 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 9 01:12:36.099933 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
May 9 01:12:36.100017 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 01:12:36.100159 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 9 01:12:36.100278 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 9 01:12:36.100383 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 9 01:12:36.100516 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
May 9 01:12:36.100612 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 9 01:12:36.100706 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 9 01:12:36.100802 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 9 01:12:36.100893 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 9 01:12:36.100996 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 9 01:12:36.101091 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 9 01:12:36.101191 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 9 01:12:36.101296 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 9 01:12:36.101394 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 9 01:12:36.103595 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
May 9 01:12:36.103710 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
May 9 01:12:36.103804 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
May 9 01:12:36.103899 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 9 01:12:36.104017 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 9 01:12:36.104123 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
May 9 01:12:36.104242 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
May 9 01:12:36.104345 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
May 9 01:12:36.104445 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
May 9 01:12:36.104792 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 9 01:12:36.104902 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 9 01:12:36.105013 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
May 9 01:12:36.105115 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
May 9 01:12:36.105226 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
May 9 01:12:36.105331 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
May 9 01:12:36.105433 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
May 9 01:12:36.106613 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
May 9 01:12:36.106732 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
May 9 01:12:36.106837 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
May 9 01:12:36.106940 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
May 9 01:12:36.106956 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 9 01:12:36.106967 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 9 01:12:36.106977 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 9 01:12:36.106987 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 9 01:12:36.106997 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 9 01:12:36.107007 kernel: iommu: Default domain type: Translated
May 9 01:12:36.107021 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 9 01:12:36.107031 kernel: PCI: Using ACPI for IRQ routing
May 9 01:12:36.107041 kernel: PCI: pci_cache_line_size set to 64 bytes
May 9 01:12:36.107051 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 9 01:12:36.107060 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
May 9 01:12:36.107161 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 9 01:12:36.107265 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 9 01:12:36.107367 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 9 01:12:36.107382 kernel: vgaarb: loaded
May 9 01:12:36.107396 kernel: clocksource: Switched to clocksource kvm-clock
May 9 01:12:36.107406 kernel: VFS: Disk quotas dquot_6.6.0
May 9 01:12:36.107416 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 01:12:36.107426 kernel: pnp: PnP ACPI init
May 9 01:12:36.108573 kernel: pnp 00:03: [dma 2]
May 9 01:12:36.108595 kernel: pnp: PnP ACPI: found 5 devices
May 9 01:12:36.108605 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 9 01:12:36.108616 kernel: NET: Registered PF_INET protocol family
May 9 01:12:36.108630 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 01:12:36.108640 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 01:12:36.108650 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 01:12:36.108661 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 01:12:36.108671 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 01:12:36.108681 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 01:12:36.108691 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 01:12:36.108701 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 01:12:36.108711 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 01:12:36.108723 kernel: NET: Registered PF_XDP protocol family
May 9 01:12:36.108819 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 9 01:12:36.108907 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 9 01:12:36.108989 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 9 01:12:36.109071 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 9 01:12:36.109153 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
May 9 01:12:36.109266 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 9 01:12:36.109366 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 9 01:12:36.109384 kernel: PCI: CLS 0 bytes, default 64
May 9 01:12:36.109393 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 9 01:12:36.109403 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
May 9 01:12:36.109412 kernel: Initialise system trusted keyrings
May 9 01:12:36.109422 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 01:12:36.109431 kernel: Key type asymmetric registered
May 9 01:12:36.109440 kernel: Asymmetric key parser 'x509' registered
May 9 01:12:36.109450 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 9 01:12:36.109461 kernel: io scheduler mq-deadline registered
May 9 01:12:36.110517 kernel: io scheduler kyber registered
May 9 01:12:36.110529 kernel: io scheduler bfq registered
May 9 01:12:36.110539 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 9 01:12:36.110550 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 9 01:12:36.110560 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 9 01:12:36.110570 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 9 01:12:36.110580 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 9 01:12:36.110590 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 01:12:36.110600 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 9 01:12:36.110614 kernel: random: crng init done
May 9 01:12:36.110624 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 9 01:12:36.110635 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 9 01:12:36.110645 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 9 01:12:36.110758 kernel: rtc_cmos 00:04: RTC can wake from S4
May 9 01:12:36.110775 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 9 01:12:36.110868 kernel: rtc_cmos 00:04: registered as rtc0
May 9 01:12:36.110964 kernel: rtc_cmos 00:04: setting system clock to 2025-05-09T01:12:35 UTC (1746753155)
May 9 01:12:36.111061 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 9 01:12:36.111076 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 9 01:12:36.111086 kernel: NET: Registered PF_INET6 protocol family
May 9 01:12:36.111096 kernel: Segment Routing with IPv6
May 9 01:12:36.111106 kernel: In-situ OAM (IOAM) with IPv6
May 9 01:12:36.111116 kernel: NET: Registered PF_PACKET protocol family
May 9 01:12:36.111126 kernel: Key type dns_resolver registered
May 9 01:12:36.111136 kernel: IPI shorthand broadcast: enabled
May 9 01:12:36.111149 kernel: sched_clock: Marking stable (1079007827, 178382159)->(1300547116, -43157130)
May 9 01:12:36.111159 kernel: registered taskstats version 1
May 9 01:12:36.111169 kernel: Loading compiled-in X.509 certificates
May 9 01:12:36.111179 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 247aefc84589d8961003173d18a9b4daf28f7c9e'
May 9 01:12:36.111189 kernel: Key type .fscrypt registered
May 9 01:12:36.111199 kernel: Key type fscrypt-provisioning registered
May 9 01:12:36.111208 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 01:12:36.111218 kernel: ima: Allocated hash algorithm: sha1
May 9 01:12:36.111229 kernel: ima: No architecture policies found
May 9 01:12:36.111240 kernel: clk: Disabling unused clocks
May 9 01:12:36.111250 kernel: Freeing unused kernel image (initmem) memory: 43604K
May 9 01:12:36.111260 kernel: Write protecting the kernel read-only data: 40960k
May 9 01:12:36.111270 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 9 01:12:36.111280 kernel: Run /init as init process
May 9 01:12:36.111290 kernel: with arguments:
May 9 01:12:36.111299 kernel: /init
May 9 01:12:36.111309 kernel: with environment:
May 9 01:12:36.111318 kernel: HOME=/
May 9 01:12:36.111330 kernel: TERM=linux
May 9 01:12:36.111339 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 01:12:36.111351 systemd[1]: Successfully made /usr/ read-only.
May 9 01:12:36.111365 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 9 01:12:36.111377 systemd[1]: Detected virtualization kvm.
May 9 01:12:36.111387 systemd[1]: Detected architecture x86-64.
May 9 01:12:36.111398 systemd[1]: Running in initrd.
May 9 01:12:36.111410 systemd[1]: No hostname configured, using default hostname.
May 9 01:12:36.111421 systemd[1]: Hostname set to .
May 9 01:12:36.111431 systemd[1]: Initializing machine ID from VM UUID.
May 9 01:12:36.111442 systemd[1]: Queued start job for default target initrd.target.
May 9 01:12:36.111452 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 01:12:36.112516 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 01:12:36.112534 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 01:12:36.112555 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 01:12:36.112568 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 01:12:36.112580 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 01:12:36.112591 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 01:12:36.112602 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 01:12:36.112612 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 01:12:36.112624 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 01:12:36.112635 systemd[1]: Reached target paths.target - Path Units.
May 9 01:12:36.112645 systemd[1]: Reached target slices.target - Slice Units.
May 9 01:12:36.112655 systemd[1]: Reached target swap.target - Swaps.
May 9 01:12:36.112665 systemd[1]: Reached target timers.target - Timer Units.
May 9 01:12:36.112675 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 01:12:36.112685 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 01:12:36.112695 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 01:12:36.112707 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 9 01:12:36.112718 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 01:12:36.112738 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 01:12:36.112749 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 01:12:36.112759 systemd[1]: Reached target sockets.target - Socket Units.
May 9 01:12:36.112769 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 01:12:36.112779 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 01:12:36.112789 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 01:12:36.112799 systemd[1]: Starting systemd-fsck-usr.service...
May 9 01:12:36.112813 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 01:12:36.112823 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 01:12:36.112834 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 01:12:36.112844 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 01:12:36.112854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 01:12:36.112892 systemd-journald[183]: Collecting audit messages is disabled.
May 9 01:12:36.112921 systemd[1]: Finished systemd-fsck-usr.service.
May 9 01:12:36.112932 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 01:12:36.112945 systemd-journald[183]: Journal started
May 9 01:12:36.112968 systemd-journald[183]: Runtime Journal (/run/log/journal/4e83be04f12a4ed6b7923060ccdda7d9) is 8M, max 78.2M, 70.2M free.
May 9 01:12:36.102300 systemd-modules-load[185]: Inserted module 'overlay'
May 9 01:12:36.155630 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 01:12:36.155656 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 01:12:36.155672 kernel: Bridge firewalling registered
May 9 01:12:36.140564 systemd-modules-load[185]: Inserted module 'br_netfilter'
May 9 01:12:36.157550 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 01:12:36.158200 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 01:12:36.158888 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 01:12:36.163287 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 01:12:36.167581 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 01:12:36.173299 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 01:12:36.177444 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 01:12:36.191823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 01:12:36.193642 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 01:12:36.194847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 01:12:36.197886 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 01:12:36.199250 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 01:12:36.212272 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 01:12:36.224672 dracut-cmdline[219]: dracut-dracut-053
May 9 01:12:36.227744 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6dbb211661f4d09f7718fdc7eab00f1550a8baafb68f4d2efdaedafa102351ae
May 9 01:12:36.262753 systemd-resolved[222]: Positive Trust Anchors:
May 9 01:12:36.262771 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 01:12:36.262817 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 01:12:36.265788 systemd-resolved[222]: Defaulting to hostname 'linux'.
May 9 01:12:36.267007 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 01:12:36.269135 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 01:12:36.321494 kernel: SCSI subsystem initialized
May 9 01:12:36.331496 kernel: Loading iSCSI transport class v2.0-870.
May 9 01:12:36.344501 kernel: iscsi: registered transport (tcp)
May 9 01:12:36.368302 kernel: iscsi: registered transport (qla4xxx)
May 9 01:12:36.368351 kernel: QLogic iSCSI HBA Driver
May 9 01:12:36.427400 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 01:12:36.433051 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 01:12:36.499481 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 01:12:36.503306 kernel: device-mapper: uevent: version 1.0.3
May 9 01:12:36.503346 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 01:12:36.564548 kernel: raid6: sse2x4 gen() 5110 MB/s
May 9 01:12:36.583535 kernel: raid6: sse2x2 gen() 5966 MB/s
May 9 01:12:36.601972 kernel: raid6: sse2x1 gen() 8979 MB/s
May 9 01:12:36.602047 kernel: raid6: using algorithm sse2x1 gen() 8979 MB/s
May 9 01:12:36.620945 kernel: raid6: .... xor() 7371 MB/s, rmw enabled
May 9 01:12:36.621000 kernel: raid6: using ssse3x2 recovery algorithm
May 9 01:12:36.643938 kernel: xor: measuring software checksum speed
May 9 01:12:36.644009 kernel: prefetch64-sse : 18479 MB/sec
May 9 01:12:36.645263 kernel: generic_sse : 16871 MB/sec
May 9 01:12:36.645325 kernel: xor: using function: prefetch64-sse (18479 MB/sec)
May 9 01:12:36.818533 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 01:12:36.836724 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 01:12:36.841920 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 01:12:36.895101 systemd-udevd[405]: Using default interface naming scheme 'v255'.
May 9 01:12:36.907266 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 01:12:36.916717 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 01:12:36.958757 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
May 9 01:12:36.999913 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 01:12:37.002645 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 01:12:37.067286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 01:12:37.070615 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 01:12:37.097320 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 01:12:37.099389 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 01:12:37.101456 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 01:12:37.102685 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 01:12:37.105336 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 01:12:37.127976 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 01:12:37.161250 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
May 9 01:12:37.165943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 01:12:37.166075 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 01:12:37.166787 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 01:12:37.167317 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 01:12:37.167448 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 01:12:37.174501 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 01:12:37.176873 kernel: libata version 3.00 loaded.
May 9 01:12:37.177253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 01:12:37.181962 kernel: ata_piix 0000:00:01.1: version 2.13
May 9 01:12:37.181768 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 9 01:12:37.186005 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 9 01:12:37.189503 kernel: scsi host0: ata_piix
May 9 01:12:37.189711 kernel: scsi host1: ata_piix
May 9 01:12:37.195355 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 01:12:37.195423 kernel: GPT:17805311 != 20971519
May 9 01:12:37.195439 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
May 9 01:12:37.195454 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 01:12:37.195493 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
May 9 01:12:37.195508 kernel: GPT:17805311 != 20971519
May 9 01:12:37.202063 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 01:12:37.202117 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 01:12:37.257049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 01:12:37.259439 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 01:12:37.291627 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 01:12:37.388511 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (465)
May 9 01:12:37.398515 kernel: BTRFS: device fsid d4537cc2-bda5-4424-8730-1f8e8c76a79a devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (456)
May 9 01:12:37.425955 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 9 01:12:37.452726 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 01:12:37.464588 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 9 01:12:37.474850 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 9 01:12:37.477130 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 9 01:12:37.481577 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 01:12:37.516233 disk-uuid[511]: Primary Header is updated.
May 9 01:12:37.516233 disk-uuid[511]: Secondary Entries is updated.
May 9 01:12:37.516233 disk-uuid[511]: Secondary Header is updated.
May 9 01:12:37.527534 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 01:12:38.549950 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 01:12:38.550007 disk-uuid[512]: The operation has completed successfully.
May 9 01:12:38.630206 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 01:12:38.630335 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 01:12:38.667908 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 01:12:38.688030 sh[523]: Success
May 9 01:12:38.710559 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
May 9 01:12:38.782483 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 01:12:38.794632 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 01:12:38.799642 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 01:12:38.846033 kernel: BTRFS info (device dm-0): first mount of filesystem d4537cc2-bda5-4424-8730-1f8e8c76a79a
May 9 01:12:38.846131 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 9 01:12:38.850823 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 01:12:38.854556 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 01:12:38.857190 kernel: BTRFS info (device dm-0): using free space tree
May 9 01:12:38.872791 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 01:12:38.875544 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 01:12:38.877340 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 01:12:38.881715 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 01:12:38.908536 kernel: BTRFS info (device vda6): first mount of filesystem 2d988641-706e-44d5-976c-175654fd684c
May 9 01:12:38.908629 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 01:12:38.908662 kernel: BTRFS info (device vda6): using free space tree
May 9 01:12:38.914626 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 01:12:38.927530 kernel: BTRFS info (device vda6): last unmount of filesystem 2d988641-706e-44d5-976c-175654fd684c
May 9 01:12:38.938124 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 01:12:38.943355 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 01:12:39.045136 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 01:12:39.049598 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 01:12:39.101250 systemd-networkd[704]: lo: Link UP
May 9 01:12:39.101261 systemd-networkd[704]: lo: Gained carrier
May 9 01:12:39.106132 systemd-networkd[704]: Enumeration completed
May 9 01:12:39.106283 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 01:12:39.106581 systemd-networkd[704]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 01:12:39.106586 systemd-networkd[704]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 01:12:39.107422 systemd-networkd[704]: eth0: Link UP
May 9 01:12:39.107426 systemd-networkd[704]: eth0: Gained carrier
May 9 01:12:39.107433 systemd-networkd[704]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 01:12:39.107574 systemd[1]: Reached target network.target - Network.
May 9 01:12:39.119292 ignition[613]: Ignition 2.20.0
May 9 01:12:39.119309 ignition[613]: Stage: fetch-offline
May 9 01:12:39.120537 systemd-networkd[704]: eth0: DHCPv4 address 172.24.4.244/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 9 01:12:39.119347 ignition[613]: no configs at "/usr/lib/ignition/base.d"
May 9 01:12:39.121525 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 01:12:39.119357 ignition[613]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:12:39.119505 ignition[613]: parsed url from cmdline: ""
May 9 01:12:39.119510 ignition[613]: no config URL provided
May 9 01:12:39.119516 ignition[613]: reading system config file "/usr/lib/ignition/user.ign"
May 9 01:12:39.119526 ignition[613]: no config at "/usr/lib/ignition/user.ign"
May 9 01:12:39.126908 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 9 01:12:39.119532 ignition[613]: failed to fetch config: resource requires networking
May 9 01:12:39.119775 ignition[613]: Ignition finished successfully
May 9 01:12:39.151393 ignition[715]: Ignition 2.20.0
May 9 01:12:39.151406 ignition[715]: Stage: fetch
May 9 01:12:39.151623 ignition[715]: no configs at "/usr/lib/ignition/base.d"
May 9 01:12:39.151636 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:12:39.151751 ignition[715]: parsed url from cmdline: ""
May 9 01:12:39.151756 ignition[715]: no config URL provided
May 9 01:12:39.151762 ignition[715]: reading system config file "/usr/lib/ignition/user.ign"
May 9 01:12:39.151771 ignition[715]: no config at "/usr/lib/ignition/user.ign"
May 9 01:12:39.151903 ignition[715]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 9 01:12:39.152142 ignition[715]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 9 01:12:39.152173 ignition[715]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 9 01:12:39.581764 ignition[715]: GET result: OK
May 9 01:12:39.581999 ignition[715]: parsing config with SHA512: cc075b1413dac7c1759b2d0bf8d262b59d3a97dea1658dae2b02d9b5db9a28a9921be08d0044df2bf604486e19531e6a37631c5c2c9cc5238ff0daf47bcca8e4
May 9 01:12:39.597763 unknown[715]: fetched base config from "system"
May 9 01:12:39.597801 unknown[715]: fetched base config from "system"
May 9 01:12:39.597824 unknown[715]: fetched user config from "openstack"
May 9 01:12:39.599781 ignition[715]: fetch: fetch complete
May 9 01:12:39.599800 ignition[715]: fetch: fetch passed
May 9 01:12:39.599925 ignition[715]: Ignition finished successfully
May 9 01:12:39.604928 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 9 01:12:39.611194 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 01:12:39.659311 ignition[721]: Ignition 2.20.0
May 9 01:12:39.659346 ignition[721]: Stage: kargs
May 9 01:12:39.660887 ignition[721]: no configs at "/usr/lib/ignition/base.d"
May 9 01:12:39.660917 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:12:39.663947 ignition[721]: kargs: kargs passed
May 9 01:12:39.664057 ignition[721]: Ignition finished successfully
May 9 01:12:39.667060 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 01:12:39.672645 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 01:12:39.712276 ignition[727]: Ignition 2.20.0
May 9 01:12:39.712294 ignition[727]: Stage: disks
May 9 01:12:39.712734 ignition[727]: no configs at "/usr/lib/ignition/base.d"
May 9 01:12:39.712758 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:12:39.715305 ignition[727]: disks: disks passed
May 9 01:12:39.717291 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 01:12:39.715400 ignition[727]: Ignition finished successfully
May 9 01:12:39.720148 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 01:12:39.721856 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 01:12:39.724340 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 01:12:39.726656 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 01:12:39.729426 systemd[1]: Reached target basic.target - Basic System.
May 9 01:12:39.735696 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 01:12:39.780619 systemd-fsck[735]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 9 01:12:39.791656 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 01:12:39.796984 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 01:12:39.954513 kernel: EXT4-fs (vda9): mounted filesystem 0829e1d9-eacd-4a94-9591-6f579c115eeb r/w with ordered data mode. Quota mode: none.
May 9 01:12:39.955789 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 01:12:39.956867 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 01:12:39.961021 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 01:12:39.964545 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 01:12:39.965256 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 01:12:39.967327 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
May 9 01:12:39.970527 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 01:12:39.971524 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 01:12:39.977180 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 01:12:39.980584 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 01:12:39.994944 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (743)
May 9 01:12:40.015136 kernel: BTRFS info (device vda6): first mount of filesystem 2d988641-706e-44d5-976c-175654fd684c
May 9 01:12:40.015215 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 01:12:40.015246 kernel: BTRFS info (device vda6): using free space tree
May 9 01:12:40.015275 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 01:12:40.020270 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 01:12:40.137948 initrd-setup-root[772]: cut: /sysroot/etc/passwd: No such file or directory
May 9 01:12:40.145514 initrd-setup-root[779]: cut: /sysroot/etc/group: No such file or directory
May 9 01:12:40.153452 initrd-setup-root[786]: cut: /sysroot/etc/shadow: No such file or directory
May 9 01:12:40.162764 initrd-setup-root[793]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 01:12:40.279410 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 01:12:40.285289 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 01:12:40.289714 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 01:12:40.302418 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 01:12:40.307433 kernel: BTRFS info (device vda6): last unmount of filesystem 2d988641-706e-44d5-976c-175654fd684c
May 9 01:12:40.347595 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 01:12:40.358494 ignition[860]: INFO : Ignition 2.20.0
May 9 01:12:40.358494 ignition[860]: INFO : Stage: mount
May 9 01:12:40.359957 ignition[860]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 01:12:40.359957 ignition[860]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:12:40.361311 ignition[860]: INFO : mount: mount passed
May 9 01:12:40.361311 ignition[860]: INFO : Ignition finished successfully
May 9 01:12:40.362980 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 01:12:40.706017 systemd-networkd[704]: eth0: Gained IPv6LL
May 9 01:12:47.184284 coreos-metadata[745]: May 09 01:12:47.184 WARN failed to locate config-drive, using the metadata service API instead
May 9 01:12:47.225789 coreos-metadata[745]: May 09 01:12:47.225 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 9 01:12:47.241398 coreos-metadata[745]: May 09 01:12:47.241 INFO Fetch successful
May 9 01:12:47.241398 coreos-metadata[745]: May 09 01:12:47.241 INFO wrote hostname ci-4284-0-0-n-58e4f3488e.novalocal to /sysroot/etc/hostname
May 9 01:12:47.246665 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 9 01:12:47.246906 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
May 9 01:12:47.254691 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 01:12:47.289102 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 01:12:47.327519 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (878)
May 9 01:12:47.328370 kernel: BTRFS info (device vda6): first mount of filesystem 2d988641-706e-44d5-976c-175654fd684c
May 9 01:12:47.332449 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 01:12:47.336725 kernel: BTRFS info (device vda6): using free space tree
May 9 01:12:47.349565 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 01:12:47.357329 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 01:12:47.404722 ignition[896]: INFO : Ignition 2.20.0
May 9 01:12:47.404722 ignition[896]: INFO : Stage: files
May 9 01:12:47.407595 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 01:12:47.407595 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:12:47.411393 ignition[896]: DEBUG : files: compiled without relabeling support, skipping
May 9 01:12:47.411393 ignition[896]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 01:12:47.411393 ignition[896]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 01:12:47.417296 ignition[896]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 01:12:47.417296 ignition[896]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 01:12:47.421464 ignition[896]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 01:12:47.419766 unknown[896]: wrote ssh authorized keys file for user: core
May 9 01:12:47.425148 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 9 01:12:47.425148 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 9 01:12:47.490707 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 01:12:47.935198 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 9 01:12:47.935198 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 9 01:12:47.940675 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 9 01:12:47.940675 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 01:12:47.940675 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 01:12:47.940675 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 01:12:47.940675 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 01:12:47.940675 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 01:12:47.940675 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 01:12:47.940675 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 01:12:47.940675 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 01:12:47.940675 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 01:12:47.940675 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 01:12:47.940675 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 01:12:47.940675 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 9 01:12:48.613301 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 9 01:12:50.160813 ignition[896]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 01:12:50.160813 ignition[896]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 9 01:12:50.166705 ignition[896]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 01:12:50.166705 ignition[896]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 01:12:50.166705 ignition[896]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 9 01:12:50.166705 ignition[896]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 9 01:12:50.166705 ignition[896]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 9 01:12:50.166705 ignition[896]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 01:12:50.166705 ignition[896]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 01:12:50.166705 ignition[896]: INFO : files: files passed
May 9 01:12:50.166705 ignition[896]: INFO : Ignition finished successfully
May 9 01:12:50.166828 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 01:12:50.173662 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 01:12:50.178968 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 01:12:50.190691 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 01:12:50.191362 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 01:12:50.205393 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 01:12:50.206422 initrd-setup-root-after-ignition[926]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 01:12:50.206422 initrd-setup-root-after-ignition[926]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 01:12:50.208838 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 01:12:50.211994 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 01:12:50.215165 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 01:12:50.270366 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 01:12:50.270612 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 01:12:50.274038 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 01:12:50.279895 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 01:12:50.280449 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 01:12:50.282632 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 01:12:50.312853 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 01:12:50.317820 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 01:12:50.348220 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 01:12:50.349956 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 01:12:50.352952 systemd[1]: Stopped target timers.target - Timer Units.
May 9 01:12:50.355687 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 01:12:50.355991 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 01:12:50.358923 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 01:12:50.360683 systemd[1]: Stopped target basic.target - Basic System.
May 9 01:12:50.363429 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 01:12:50.365861 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 01:12:50.368261 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 01:12:50.371096 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 01:12:50.374529 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 01:12:50.377451 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 01:12:50.380693 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 01:12:50.382418 systemd[1]: Stopped target swap.target - Swaps.
May 9 01:12:50.385265 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 01:12:50.385716 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 01:12:50.389007 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 01:12:50.391147 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 01:12:50.393520 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 01:12:50.393863 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 01:12:50.398007 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 01:12:50.398323 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 01:12:50.401145 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 01:12:50.401459 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 01:12:50.403299 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 01:12:50.403753 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 01:12:50.409928 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 01:12:50.415308 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 01:12:50.418075 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 01:12:50.419633 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 01:12:50.420540 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 01:12:50.420698 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 01:12:50.430882 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 01:12:50.431589 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 01:12:50.445726 ignition[950]: INFO : Ignition 2.20.0
May 9 01:12:50.445726 ignition[950]: INFO : Stage: umount
May 9 01:12:50.448014 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 01:12:50.448014 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:12:50.450338 ignition[950]: INFO : umount: umount passed
May 9 01:12:50.450338 ignition[950]: INFO : Ignition finished successfully
May 9 01:12:50.450530 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 01:12:50.450645 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 01:12:50.451893 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 01:12:50.451975 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 01:12:50.452877 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 01:12:50.452924 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 01:12:50.453902 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 9 01:12:50.453943 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 9 01:12:50.454991 systemd[1]: Stopped target network.target - Network.
May 9 01:12:50.456001 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 01:12:50.456050 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 01:12:50.457073 systemd[1]: Stopped target paths.target - Path Units.
May 9 01:12:50.458014 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 01:12:50.458057 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 01:12:50.459137 systemd[1]: Stopped target slices.target - Slice Units.
May 9 01:12:50.460266 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 01:12:50.461277 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 01:12:50.461313 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 01:12:50.462259 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 01:12:50.462288 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 01:12:50.463444 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 01:12:50.463516 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 01:12:50.466274 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 01:12:50.466314 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 01:12:50.468099 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 01:12:50.468660 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 01:12:50.476370 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 01:12:50.476493 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 01:12:50.480592 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 9 01:12:50.480835 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 01:12:50.480940 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 01:12:50.483197 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 9 01:12:50.484024 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 01:12:50.484222 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 01:12:50.486598 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 01:12:50.487120 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 01:12:50.487167 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 01:12:50.489562 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 01:12:50.489618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 01:12:50.491564 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 01:12:50.491607 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 01:12:50.496757 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 01:12:50.496801 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 01:12:50.499446 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 01:12:50.502704 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 9 01:12:50.502764 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 9 01:12:50.505810 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 01:12:50.505962 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 01:12:50.507805 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 01:12:50.507858 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 01:12:50.510885 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 01:12:50.510924 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 01:12:50.511434 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 01:12:50.511493 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 01:12:50.512033 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 01:12:50.512074 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 01:12:50.513415 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 01:12:50.513459 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 01:12:50.518664 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 01:12:50.519168 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 01:12:50.519216 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 01:12:50.523060 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 9 01:12:50.523124 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 01:12:50.524159 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 01:12:50.524202 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 01:12:50.525368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 01:12:50.525413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 01:12:50.527967 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 01:12:50.528052 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 9 01:12:50.528101 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 9 01:12:50.528746 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 01:12:50.528850 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 01:12:50.536752 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 01:12:50.536837 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 01:12:50.731593 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 01:12:50.731835 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 01:12:50.735280 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 01:12:50.737068 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 01:12:50.737194 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 01:12:50.741732 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 01:12:50.770870 systemd[1]: Switching root.
May 9 01:12:50.815611 systemd-journald[183]: Journal stopped
May 9 01:12:52.338907 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
May 9 01:12:52.338971 kernel: SELinux: policy capability network_peer_controls=1
May 9 01:12:52.338991 kernel: SELinux: policy capability open_perms=1
May 9 01:12:52.339003 kernel: SELinux: policy capability extended_socket_class=1
May 9 01:12:52.339020 kernel: SELinux: policy capability always_check_network=0
May 9 01:12:52.339034 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 01:12:52.339046 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 01:12:52.339062 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 01:12:52.339073 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 01:12:52.339084 kernel: audit: type=1403 audit(1746753171.249:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 01:12:52.339097 systemd[1]: Successfully loaded SELinux policy in 60.009ms.
May 9 01:12:52.339120 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.955ms.
May 9 01:12:52.340507 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 9 01:12:52.340527 systemd[1]: Detected virtualization kvm.
May 9 01:12:52.340544 systemd[1]: Detected architecture x86-64.
May 9 01:12:52.340556 systemd[1]: Detected first boot.
May 9 01:12:52.340571 systemd[1]: Hostname set to .
May 9 01:12:52.340583 systemd[1]: Initializing machine ID from VM UUID.
May 9 01:12:52.340595 zram_generator::config[996]: No configuration found.
May 9 01:12:52.340614 kernel: Guest personality initialized and is inactive
May 9 01:12:52.340625 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 9 01:12:52.340639 kernel: Initialized host personality
May 9 01:12:52.340650 kernel: NET: Registered PF_VSOCK protocol family
May 9 01:12:52.340662 systemd[1]: Populated /etc with preset unit settings.
May 9 01:12:52.340675 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 9 01:12:52.340688 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 01:12:52.340700 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 01:12:52.340712 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 01:12:52.340724 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 01:12:52.340742 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 01:12:52.340756 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 01:12:52.340768 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 01:12:52.340780 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 01:12:52.340794 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 01:12:52.340806 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 01:12:52.340818 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 01:12:52.340830 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 01:12:52.340844 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 01:12:52.340856 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 01:12:52.340871 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 01:12:52.340884 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 01:12:52.340896 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 01:12:52.340909 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 9 01:12:52.340922 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 01:12:52.340934 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 01:12:52.340948 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 01:12:52.340961 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 01:12:52.340973 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 01:12:52.340986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 01:12:52.340998 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 01:12:52.341010 systemd[1]: Reached target slices.target - Slice Units.
May 9 01:12:52.341022 systemd[1]: Reached target swap.target - Swaps.
May 9 01:12:52.341034 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 01:12:52.341046 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 01:12:52.341061 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 9 01:12:52.341073 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 01:12:52.341085 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 01:12:52.341098 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 01:12:52.341112 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 01:12:52.341125 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 01:12:52.341137 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 01:12:52.341149 systemd[1]: Mounting media.mount - External Media Directory...
May 9 01:12:52.341161 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 01:12:52.341175 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 01:12:52.341188 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 01:12:52.341200 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 01:12:52.341213 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 01:12:52.341225 systemd[1]: Reached target machines.target - Containers.
May 9 01:12:52.341237 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 01:12:52.341249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 01:12:52.341262 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 01:12:52.341276 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 01:12:52.341288 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 01:12:52.341300 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 01:12:52.341312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 01:12:52.341325 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 01:12:52.341337 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 01:12:52.341349 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 01:12:52.341362 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 01:12:52.341375 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 01:12:52.341389 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 01:12:52.341401 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 01:12:52.341413 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 9 01:12:52.341426 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 01:12:52.341438 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 01:12:52.341449 kernel: fuse: init (API version 7.39)
May 9 01:12:52.341461 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 01:12:52.343624 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 01:12:52.343644 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 9 01:12:52.343657 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 01:12:52.343669 kernel: ACPI: bus type drm_connector registered
May 9 01:12:52.343681 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 01:12:52.343694 systemd[1]: Stopped verity-setup.service.
May 9 01:12:52.343709 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 01:12:52.343722 kernel: loop: module loaded
May 9 01:12:52.343734 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 01:12:52.343747 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 01:12:52.343759 systemd[1]: Mounted media.mount - External Media Directory.
May 9 01:12:52.343773 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 01:12:52.343803 systemd-journald[1088]: Collecting audit messages is disabled.
May 9 01:12:52.343830 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 01:12:52.343844 systemd-journald[1088]: Journal started
May 9 01:12:52.343869 systemd-journald[1088]: Runtime Journal (/run/log/journal/4e83be04f12a4ed6b7923060ccdda7d9) is 8M, max 78.2M, 70.2M free.
May 9 01:12:51.973673 systemd[1]: Queued start job for default target multi-user.target.
May 9 01:12:51.981768 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 9 01:12:51.982271 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 01:12:52.348545 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 01:12:52.349190 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 01:12:52.350084 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 01:12:52.350948 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 01:12:52.351123 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 01:12:52.352799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 01:12:52.352954 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 01:12:52.353676 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 01:12:52.353828 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 01:12:52.355672 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 01:12:52.355851 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 01:12:52.356753 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 01:12:52.357532 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 01:12:52.358256 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 01:12:52.358416 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 01:12:52.361099 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 01:12:52.361945 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 01:12:52.366329 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 01:12:52.378735 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 9 01:12:52.384148 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 01:12:52.385364 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 01:12:52.390571 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 01:12:52.396337 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 01:12:52.401550 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 01:12:52.401593 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 01:12:52.403437 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 9 01:12:52.407607 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 01:12:52.409547 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 01:12:52.410224 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 01:12:52.413699 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 01:12:52.418533 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 01:12:52.419185 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 01:12:52.422135 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 01:12:52.423591 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 01:12:52.425621 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 01:12:52.428692 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 01:12:52.430758 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 01:12:52.437144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 01:12:52.437928 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 01:12:52.439761 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 01:12:52.440538 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 01:12:52.453204 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 01:12:52.461092 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 01:12:52.461927 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 01:12:52.464582 systemd-journald[1088]: Time spent on flushing to /var/log/journal/4e83be04f12a4ed6b7923060ccdda7d9 is 40.279ms for 963 entries.
May 9 01:12:52.464582 systemd-journald[1088]: System Journal (/var/log/journal/4e83be04f12a4ed6b7923060ccdda7d9) is 8M, max 584.8M, 576.8M free.
May 9 01:12:52.562836 systemd-journald[1088]: Received client request to flush runtime journal.
May 9 01:12:52.562935 kernel: loop0: detected capacity change from 0 to 210664
May 9 01:12:52.466814 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 9 01:12:52.495730 udevadm[1141]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 9 01:12:52.539747 systemd-tmpfiles[1136]: ACLs are not supported, ignoring.
May 9 01:12:52.539762 systemd-tmpfiles[1136]: ACLs are not supported, ignoring.
May 9 01:12:52.545623 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 01:12:52.549524 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 01:12:52.552593 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 01:12:52.564628 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 01:12:52.584396 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 9 01:12:52.601489 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 01:12:52.615579 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 01:12:52.621427 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 01:12:52.641500 kernel: loop1: detected capacity change from 0 to 8
May 9 01:12:52.649025 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
May 9 01:12:52.649306 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
May 9 01:12:52.655560 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 01:12:52.665494 kernel: loop2: detected capacity change from 0 to 109808
May 9 01:12:52.713511 kernel: loop3: detected capacity change from 0 to 151640
May 9 01:12:52.803163 kernel: loop4: detected capacity change from 0 to 210664
May 9 01:12:52.859762 kernel: loop5: detected capacity change from 0 to 8
May 9 01:12:52.864497 kernel: loop6: detected capacity change from 0 to 109808
May 9 01:12:52.908529 kernel: loop7: detected capacity change from 0 to 151640
May 9 01:12:52.985702 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 01:12:53.009179 (sd-merge)[1165]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 9 01:12:53.010298 (sd-merge)[1165]: Merged extensions into '/usr'.
May 9 01:12:53.024331 systemd[1]: Reload requested from client PID 1135 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 01:12:53.024361 systemd[1]: Reloading...
May 9 01:12:53.172602 zram_generator::config[1191]: No configuration found.
May 9 01:12:53.376929 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 01:12:53.423773 ldconfig[1130]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 01:12:53.460801 systemd[1]: Reloading finished in 435 ms.
May 9 01:12:53.479765 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 01:12:53.480993 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 01:12:53.482349 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 01:12:53.490748 systemd[1]: Starting ensure-sysext.service...
May 9 01:12:53.494230 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 01:12:53.497591 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 01:12:53.531446 systemd[1]: Reload requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)...
May 9 01:12:53.531484 systemd[1]: Reloading...
May 9 01:12:53.544137 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 01:12:53.545681 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 01:12:53.548229 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 01:12:53.549127 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
May 9 01:12:53.549420 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
May 9 01:12:53.563761 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
May 9 01:12:53.564533 systemd-tmpfiles[1251]: Skipping /boot
May 9 01:12:53.593405 systemd-udevd[1252]: Using default interface naming scheme 'v255'.
May 9 01:12:53.596439 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
May 9 01:12:53.596455 systemd-tmpfiles[1251]: Skipping /boot
May 9 01:12:53.630492 zram_generator::config[1279]: No configuration found.
May 9 01:12:53.760490 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1307)
May 9 01:12:53.799107 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 9 01:12:53.837529 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 9 01:12:53.858795 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
May 9 01:12:53.869506 kernel: ACPI: button: Power Button [PWRF]
May 9 01:12:53.904116 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 01:12:53.939505 kernel: mousedev: PS/2 mouse device common for all mice
May 9 01:12:53.946810 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 9 01:12:53.946852 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 9 01:12:53.950875 kernel: Console: switching to colour dummy device 80x25
May 9 01:12:53.953165 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 9 01:12:53.953193 kernel: [drm] features: -context_init
May 9 01:12:53.959764 kernel: [drm] number of scanouts: 1
May 9 01:12:53.959823 kernel: [drm] number of cap sets: 0
May 9 01:12:53.963501 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 9 01:12:53.975944 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 9 01:12:53.976006 kernel: Console: switching to colour frame buffer device 160x50
May 9 01:12:53.985497 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 9 01:12:54.045375 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 01:12:54.049066 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 9 01:12:54.049176 systemd[1]: Reloading finished in 517 ms.
May 9 01:12:54.066184 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 01:12:54.072670 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 01:12:54.128117 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 01:12:54.130194 systemd[1]: Finished ensure-sysext.service.
May 9 01:12:54.151190 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 01:12:54.154005 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 01:12:54.169715 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 01:12:54.170259 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 01:12:54.173625 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 01:12:54.182302 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 01:12:54.187327 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 01:12:54.208501 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 01:12:54.220729 lvm[1374]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 01:12:54.226704 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 01:12:54.228194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 01:12:54.231628 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 01:12:54.231756 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 9 01:12:54.235647 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 01:12:54.239616 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 01:12:54.247646 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 01:12:54.259726 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 9 01:12:54.269970 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 01:12:54.274660 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 01:12:54.274777 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 01:12:54.276998 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 01:12:54.277360 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 01:12:54.277734 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 01:12:54.280533 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 01:12:54.281776 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 01:12:54.285139 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 01:12:54.285410 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 01:12:54.289255 augenrules[1405]: No rules
May 9 01:12:54.294031 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 01:12:54.294737 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 01:12:54.297787 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 01:12:54.297995 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 01:12:54.301152 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 01:12:54.319797 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 01:12:54.325859 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 01:12:54.329612 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 01:12:54.329835 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 01:12:54.333889 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 01:12:54.337200 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 01:12:54.344930 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 01:12:54.355629 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 01:12:54.363127 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 01:12:54.390540 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 01:12:54.393037 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 01:12:54.396183 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 01:12:54.399714 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 01:12:54.411449 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 01:12:54.450653 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 01:12:54.508258 systemd-networkd[1396]: lo: Link UP
May 9 01:12:54.508271 systemd-networkd[1396]: lo: Gained carrier
May 9 01:12:54.509676 systemd-networkd[1396]: Enumeration completed
May 9 01:12:54.509783 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 01:12:54.516610 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 01:12:54.516624 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 01:12:54.517638 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 9 01:12:54.522825 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 01:12:54.524758 systemd-networkd[1396]: eth0: Link UP
May 9 01:12:54.524763 systemd-networkd[1396]: eth0: Gained carrier
May 9 01:12:54.524790 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 01:12:54.535546 systemd-networkd[1396]: eth0: DHCPv4 address 172.24.4.244/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 9 01:12:54.550072 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 9 01:12:54.551140 systemd[1]: Reached target time-set.target - System Time Set.
May 9 01:12:54.559496 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 9 01:12:54.564151 systemd-resolved[1397]: Positive Trust Anchors:
May 9 01:12:54.564426 systemd-resolved[1397]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 01:12:54.564568 systemd-resolved[1397]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 01:12:54.570505 systemd-resolved[1397]: Using system hostname 'ci-4284-0-0-n-58e4f3488e.novalocal'.
May 9 01:12:54.572121 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 01:12:54.572748 systemd[1]: Reached target network.target - Network.
May 9 01:12:54.573218 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 01:12:54.573706 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 01:12:54.574276 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 01:12:54.577789 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 01:12:54.580005 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 01:12:54.582135 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 01:12:54.584313 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 01:12:54.586396 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 01:12:54.586552 systemd[1]: Reached target paths.target - Path Units.
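The lease details that systemd-networkd logs for eth0 can be pulled back out of a journal line with plain text tools. A minimal sketch, using the exact DHCPv4 line from this log as sample input (the `sed` patterns are illustrative, not a documented journal format guarantee):

```shell
# Sample line copied verbatim from the log above.
line='May 9 01:12:54.535546 systemd-networkd[1396]: eth0: DHCPv4 address 172.24.4.244/24, gateway 172.24.4.1 acquired from 172.24.4.1'

# Extract the leased address/prefix and the gateway.
addr=$(printf '%s\n' "$line" | sed -n 's/.*DHCPv4 address \([^,]*\),.*/\1/p')
gw=$(printf '%s\n' "$line" | sed -n 's/.*gateway \([^ ]*\) acquired.*/\1/p')
echo "address=$addr gateway=$gw"
```

In practice the same information is available structured via `networkctl status eth0`; parsing the journal like this is only useful when all you have is the captured log text.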
May 9 01:12:54.588757 systemd[1]: Reached target timers.target - Timer Units.
May 9 01:12:54.594237 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 01:12:54.600264 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 01:12:54.604802 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 9 01:12:54.609289 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 9 01:12:54.612925 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 9 01:12:54.623243 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 01:12:54.624444 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 9 01:12:54.626897 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 01:12:54.629849 systemd[1]: Reached target sockets.target - Socket Units.
May 9 01:12:54.631934 systemd[1]: Reached target basic.target - Basic System.
May 9 01:12:54.634012 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 01:12:54.634120 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 01:12:54.636439 systemd[1]: Starting containerd.service - containerd container runtime...
May 9 01:12:54.641636 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 9 01:12:54.658610 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 9 01:12:54.661725 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 9 01:12:54.669995 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 9 01:12:54.670663 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 9 01:12:54.675388 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 9 01:12:54.681732 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 9 01:12:55.707142 systemd-timesyncd[1399]: Contacted time server 23.150.41.122:123 (0.flatcar.pool.ntp.org).
May 9 01:12:55.707159 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 9 01:12:55.714656 jq[1449]: false
May 9 01:12:55.707241 systemd-timesyncd[1399]: Initial clock synchronization to Fri 2025-05-09 01:12:55.705169 UTC.
May 9 01:12:55.709058 systemd-resolved[1397]: Clock change detected. Flushing caches.
May 9 01:12:55.718586 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 9 01:12:55.728141 systemd[1]: Starting systemd-logind.service - User Login Management...
May 9 01:12:55.740507 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 9 01:12:55.741265 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 9 01:12:55.743345 systemd[1]: Starting update-engine.service - Update Engine...
May 9 01:12:55.754234 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 9 01:12:55.764237 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 9 01:12:55.765261 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 9 01:12:55.770364 systemd[1]: motdgen.service: Deactivated successfully.
May 9 01:12:55.771318 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 9 01:12:55.774307 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 9 01:12:55.775952 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 9 01:12:55.795311 extend-filesystems[1450]: Found loop4
May 9 01:12:55.804779 extend-filesystems[1450]: Found loop5
May 9 01:12:55.804779 extend-filesystems[1450]: Found loop6
May 9 01:12:55.804779 extend-filesystems[1450]: Found loop7
May 9 01:12:55.804779 extend-filesystems[1450]: Found vda
May 9 01:12:55.804779 extend-filesystems[1450]: Found vda1
May 9 01:12:55.804779 extend-filesystems[1450]: Found vda2
May 9 01:12:55.804779 extend-filesystems[1450]: Found vda3
May 9 01:12:55.804779 extend-filesystems[1450]: Found usr
May 9 01:12:55.804779 extend-filesystems[1450]: Found vda4
May 9 01:12:55.804779 extend-filesystems[1450]: Found vda6
May 9 01:12:55.804779 extend-filesystems[1450]: Found vda7
May 9 01:12:55.804779 extend-filesystems[1450]: Found vda9
May 9 01:12:55.804779 extend-filesystems[1450]: Checking size of /dev/vda9
May 9 01:12:55.882444 update_engine[1461]: I20250509 01:12:55.823923 1461 main.cc:92] Flatcar Update Engine starting
May 9 01:12:55.882444 update_engine[1461]: I20250509 01:12:55.876389 1461 update_check_scheduler.cc:74] Next update check in 4m37s
May 9 01:12:55.841694 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 9 01:12:55.841490 dbus-daemon[1446]: [system] SELinux support is enabled
May 9 01:12:55.882894 tar[1469]: linux-amd64/helm
May 9 01:12:55.850533 systemd-logind[1455]: New seat seat0.
May 9 01:12:55.883251 jq[1467]: true
May 9 01:12:55.858108 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button)
May 9 01:12:55.858130 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 9 01:12:55.860816 systemd[1]: Started systemd-logind.service - User Login Management.
May 9 01:12:55.868288 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 9 01:12:55.868315 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 9 01:12:55.877722 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 9 01:12:55.877743 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 9 01:12:55.878259 systemd[1]: Started update-engine.service - Update Engine.
May 9 01:12:55.885252 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 9 01:12:55.894066 extend-filesystems[1450]: Resized partition /dev/vda9
May 9 01:12:55.892961 dbus-daemon[1446]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 9 01:12:55.894827 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 9 01:12:55.917597 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 9 01:12:55.917644 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 9 01:12:55.917681 extend-filesystems[1486]: resize2fs 1.47.2 (1-Jan-2025)
May 9 01:12:56.049427 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1297)
May 9 01:12:55.954373 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 9 01:12:56.049673 jq[1480]: true
May 9 01:12:56.055189 extend-filesystems[1486]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 9 01:12:56.055189 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 1
May 9 01:12:56.055189 extend-filesystems[1486]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 9 01:12:56.069365 extend-filesystems[1450]: Resized filesystem in /dev/vda9
May 9 01:12:56.056660 systemd[1]: extend-filesystems.service: Deactivated successfully.
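The resize2fs output above reports that vda9 grew from 1617920 to 2014203 blocks at a 4 KiB block size. A quick back-of-envelope check of how much space that online resize actually added, using only the numbers from the log:

```shell
# Block counts taken from the EXT4-fs / resize2fs messages above; 4 KiB blocks.
old_blocks=1617920
new_blocks=2014203
added_mib=$(( (new_blocks - old_blocks) * 4 / 1024 ))
echo "vda9 grew by ${added_mib} MiB"
```

This matches what extend-filesystems.service did here: because the filesystem was mounted on /, resize2fs performed an on-line grow rather than an offline one.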
May 9 01:12:56.056875 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 9 01:12:56.103025 bash[1503]: Updated "/home/core/.ssh/authorized_keys"
May 9 01:12:56.104028 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 9 01:12:56.117257 systemd[1]: Starting sshkeys.service...
May 9 01:12:56.163699 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 9 01:12:56.174319 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 9 01:12:56.283454 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 9 01:12:56.373323 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 9 01:12:56.399275 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 9 01:12:56.406281 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 9 01:12:56.411602 systemd[1]: Started sshd@0-172.24.4.244:22-172.24.4.1:53588.service - OpenSSH per-connection server daemon (172.24.4.1:53588).
May 9 01:12:56.430458 containerd[1478]: time="2025-05-09T01:12:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 9 01:12:56.431355 containerd[1478]: time="2025-05-09T01:12:56.430994490Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 9 01:12:56.451377 systemd[1]: issuegen.service: Deactivated successfully.
May 9 01:12:56.451594 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 9 01:12:56.463304 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 9 01:12:56.468056 containerd[1478]: time="2025-05-09T01:12:56.467785772Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.383µs"
May 9 01:12:56.468056 containerd[1478]: time="2025-05-09T01:12:56.467836186Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 9 01:12:56.468056 containerd[1478]: time="2025-05-09T01:12:56.467863087Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 9 01:12:56.468390 containerd[1478]: time="2025-05-09T01:12:56.468239964Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 9 01:12:56.469037 containerd[1478]: time="2025-05-09T01:12:56.469016520Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 9 01:12:56.469138 containerd[1478]: time="2025-05-09T01:12:56.469121196Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 9 01:12:56.469259 containerd[1478]: time="2025-05-09T01:12:56.469240129Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 9 01:12:56.469318 containerd[1478]: time="2025-05-09T01:12:56.469304650Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 9 01:12:56.469645 containerd[1478]: time="2025-05-09T01:12:56.469622727Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 9 01:12:56.471922 containerd[1478]: time="2025-05-09T01:12:56.471020919Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 9 01:12:56.471922 containerd[1478]: time="2025-05-09T01:12:56.471048250Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 9 01:12:56.471922 containerd[1478]: time="2025-05-09T01:12:56.471060293Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 9 01:12:56.471922 containerd[1478]: time="2025-05-09T01:12:56.471147306Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 9 01:12:56.471922 containerd[1478]: time="2025-05-09T01:12:56.471368521Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 9 01:12:56.471922 containerd[1478]: time="2025-05-09T01:12:56.471400351Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 9 01:12:56.471922 containerd[1478]: time="2025-05-09T01:12:56.471412654Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 9 01:12:56.471922 containerd[1478]: time="2025-05-09T01:12:56.471451066Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 9 01:12:56.471922 containerd[1478]: time="2025-05-09T01:12:56.471732454Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 9 01:12:56.471922 containerd[1478]: time="2025-05-09T01:12:56.471794280Z" level=info msg="metadata content store policy set" policy=shared
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484043814Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484127691Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484149492Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484168287Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484183205Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484197001Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484215315Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484248337Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484261682Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484273424Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484284144Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484297770Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484423566Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 9 01:12:56.485011 containerd[1478]: time="2025-05-09T01:12:56.484447411Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 9 01:12:56.485405 containerd[1478]: time="2025-05-09T01:12:56.484461166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 9 01:12:56.485405 containerd[1478]: time="2025-05-09T01:12:56.484483067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 9 01:12:56.485405 containerd[1478]: time="2025-05-09T01:12:56.484496473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 9 01:12:56.485405 containerd[1478]: time="2025-05-09T01:12:56.484509657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 9 01:12:56.485405 containerd[1478]: time="2025-05-09T01:12:56.484522551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 9 01:12:56.485405 containerd[1478]: time="2025-05-09T01:12:56.484534294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 9 01:12:56.485405 containerd[1478]: time="2025-05-09T01:12:56.484546807Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 9 01:12:56.485405 containerd[1478]: time="2025-05-09T01:12:56.484559811Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 9 01:12:56.485405 containerd[1478]: time="2025-05-09T01:12:56.484571213Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 9 01:12:56.485405 containerd[1478]: time="2025-05-09T01:12:56.484639621Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 9 01:12:56.485405 containerd[1478]: time="2025-05-09T01:12:56.484655691Z" level=info msg="Start snapshots syncer"
May 9 01:12:56.485405 containerd[1478]: time="2025-05-09T01:12:56.484684445Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 9 01:12:56.486801 containerd[1478]: time="2025-05-09T01:12:56.486257706Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 9 01:12:56.486801 containerd[1478]: time="2025-05-09T01:12:56.486664739Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 9 01:12:56.486956 containerd[1478]: time="2025-05-09T01:12:56.486923995Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 9 01:12:56.488656 containerd[1478]: time="2025-05-09T01:12:56.487287177Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 9 01:12:56.488656 containerd[1478]: time="2025-05-09T01:12:56.487344103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 9 01:12:56.488656 containerd[1478]: time="2025-05-09T01:12:56.487361436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 9 01:12:56.488656 containerd[1478]: time="2025-05-09T01:12:56.487374230Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 9 01:12:56.488656 containerd[1478]: time="2025-05-09T01:12:56.487537195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 9 01:12:56.488656 containerd[1478]: time="2025-05-09T01:12:56.487552224Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 9 01:12:56.488656 containerd[1478]: time="2025-05-09T01:12:56.487565078Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 9 01:12:56.488656 containerd[1478]: time="2025-05-09T01:12:56.487797474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 9 01:12:56.488656 containerd[1478]: time="2025-05-09T01:12:56.487826809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 9 01:12:56.488656 containerd[1478]: time="2025-05-09T01:12:56.487840685Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 9 01:12:56.488656 containerd[1478]: time="2025-05-09T01:12:56.487899034Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 9 01:12:56.488656 containerd[1478]: time="2025-05-09T01:12:56.487919903Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 9 01:12:56.488656 containerd[1478]: time="2025-05-09T01:12:56.487936905Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 9 01:12:56.488954 containerd[1478]: time="2025-05-09T01:12:56.488266363Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 9 01:12:56.488954 containerd[1478]: time="2025-05-09T01:12:56.488279187Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 9 01:12:56.488954 containerd[1478]: time="2025-05-09T01:12:56.488290849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 9 01:12:56.488954 containerd[1478]: time="2025-05-09T01:12:56.488303783Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 9 01:12:56.488954 containerd[1478]: time="2025-05-09T01:12:56.488321787Z" level=info msg="runtime interface created"
May 9 01:12:56.488954 containerd[1478]: time="2025-05-09T01:12:56.488328730Z" level=info msg="created NRI interface"
May 9 01:12:56.488954 containerd[1478]: time="2025-05-09T01:12:56.488340212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 9 01:12:56.488954 containerd[1478]: time="2025-05-09T01:12:56.488352795Z" level=info msg="Connect containerd service"
May 9 01:12:56.488954 containerd[1478]: time="2025-05-09T01:12:56.488379926Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 9 01:12:56.491626 containerd[1478]: time="2025-05-09T01:12:56.491572263Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 9 01:12:56.492177 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 9 01:12:56.501198 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 9 01:12:56.507276 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 9 01:12:56.510335 systemd[1]: Reached target getty.target - Login Prompts.
May 9 01:12:56.703589 containerd[1478]: time="2025-05-09T01:12:56.703452665Z" level=info msg="Start subscribing containerd event"
May 9 01:12:56.703589 containerd[1478]: time="2025-05-09T01:12:56.703545349Z" level=info msg="Start recovering state"
May 9 01:12:56.703737 containerd[1478]: time="2025-05-09T01:12:56.703698947Z" level=info msg="Start event monitor"
May 9 01:12:56.703737 containerd[1478]: time="2025-05-09T01:12:56.703721028Z" level=info msg="Start cni network conf syncer for default"
May 9 01:12:56.703737 containerd[1478]: time="2025-05-09T01:12:56.703729825Z" level=info msg="Start streaming server"
May 9 01:12:56.703815 containerd[1478]: time="2025-05-09T01:12:56.703744472Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 9 01:12:56.703815 containerd[1478]: time="2025-05-09T01:12:56.703754511Z" level=info msg="runtime interface starting up..."
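[Editor's note] The `failed to load cni during init` error above is expected on a first boot: containerd found no network config in /etc/cni/net.d, which stays empty until a CNI plugin is installed. For orientation only, a network config of the general conflist shape the CRI plugin scans that directory for might look like the sketch below; the network name, bridge device, and subnet are hypothetical placeholders, not values from this host.

```json
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.85.0.0/16" }]]
      }
    }
  ]
}
```

Once a file like this (or one installed by a CNI provider) appears in /etc/cni/net.d, the "cni network conf syncer" started later in this log picks it up without a containerd restart.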
May 9 01:12:56.703815 containerd[1478]: time="2025-05-09T01:12:56.703762025Z" level=info msg="starting plugins..."
May 9 01:12:56.703815 containerd[1478]: time="2025-05-09T01:12:56.703782313Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 9 01:12:56.704417 containerd[1478]: time="2025-05-09T01:12:56.704108535Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 9 01:12:56.704417 containerd[1478]: time="2025-05-09T01:12:56.704170391Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 9 01:12:56.707809 containerd[1478]: time="2025-05-09T01:12:56.707274964Z" level=info msg="containerd successfully booted in 0.277154s"
May 9 01:12:56.708781 systemd[1]: Started containerd.service - containerd container runtime.
May 9 01:12:56.723184 tar[1469]: linux-amd64/LICENSE
May 9 01:12:56.723184 tar[1469]: linux-amd64/README.md
May 9 01:12:56.735730 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 9 01:12:57.337150 systemd-networkd[1396]: eth0: Gained IPv6LL
May 9 01:12:57.340378 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 9 01:12:57.346446 systemd[1]: Reached target network-online.target - Network is Online.
May 9 01:12:57.354009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 01:12:57.374411 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 9 01:12:57.442371 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 9 01:12:57.486946 sshd[1530]: Accepted publickey for core from 172.24.4.1 port 53588 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:12:57.492471 sshd-session[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:12:57.524549 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 9 01:12:57.533274 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 9 01:12:57.542006 systemd-logind[1455]: New session 1 of user core.
May 9 01:12:57.567432 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 9 01:12:57.573873 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 9 01:12:57.589611 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 9 01:12:57.592512 systemd-logind[1455]: New session c1 of user core.
May 9 01:12:57.759399 systemd[1573]: Queued start job for default target default.target.
May 9 01:12:57.765422 systemd[1573]: Created slice app.slice - User Application Slice.
May 9 01:12:57.765450 systemd[1573]: Reached target paths.target - Paths.
May 9 01:12:57.765494 systemd[1573]: Reached target timers.target - Timers.
May 9 01:12:57.769468 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 9 01:12:57.808171 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 9 01:12:57.808297 systemd[1573]: Reached target sockets.target - Sockets.
May 9 01:12:57.808337 systemd[1573]: Reached target basic.target - Basic System.
May 9 01:12:57.808374 systemd[1573]: Reached target default.target - Main User Target.
May 9 01:12:57.808401 systemd[1573]: Startup finished in 209ms.
May 9 01:12:57.808552 systemd[1]: Started user@500.service - User Manager for UID 500.
May 9 01:12:57.816242 systemd[1]: Started session-1.scope - Session 1 of User core.
May 9 01:12:58.221688 systemd[1]: Started sshd@1-172.24.4.244:22-172.24.4.1:43804.service - OpenSSH per-connection server daemon (172.24.4.1:43804).
May 9 01:12:59.197157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 01:12:59.216927 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 01:13:00.237381 sshd[1584]: Accepted publickey for core from 172.24.4.1 port 43804 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:13:00.240777 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:13:00.255832 systemd-logind[1455]: New session 2 of user core.
May 9 01:13:00.263541 systemd[1]: Started session-2.scope - Session 2 of User core.
May 9 01:13:00.675406 kubelet[1592]: E0509 01:13:00.675296 1592 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 01:13:00.680346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 01:13:00.680682 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 01:13:00.681855 systemd[1]: kubelet.service: Consumed 2.195s CPU time, 247.6M memory peak.
May 9 01:13:00.816037 sshd[1600]: Connection closed by 172.24.4.1 port 43804
May 9 01:13:00.816177 sshd-session[1584]: pam_unix(sshd:session): session closed for user core
May 9 01:13:00.834652 systemd[1]: sshd@1-172.24.4.244:22-172.24.4.1:43804.service: Deactivated successfully.
May 9 01:13:00.838574 systemd[1]: session-2.scope: Deactivated successfully.
May 9 01:13:00.841072 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit.
May 9 01:13:00.846322 systemd[1]: Started sshd@2-172.24.4.244:22-172.24.4.1:43818.service - OpenSSH per-connection server daemon (172.24.4.1:43818).
May 9 01:13:00.854965 systemd-logind[1455]: Removed session 2.
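[Editor's note] The kubelet failure above recurs throughout this log: kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist. On a kubeadm-managed node that file is written by `kubeadm init` or `kubeadm join`, neither of which has run yet at this point in the boot, so the failure is expected. A minimal hedged sketch of the check the error message implies (the helper name is hypothetical; the path is the one from the log):

```python
from pathlib import Path

def kubelet_config_state(path="/var/lib/kubelet/config.yaml"):
    """Return 'present' or 'missing' for the kubelet config file at `path`.

    Mirrors the condition behind the log's 'failed to load kubelet config
    file ... no such file or directory' error: kubelet refuses to start
    until kubeadm (or an operator) has written this file.
    """
    return "present" if Path(path).is_file() else "missing"

if __name__ == "__main__":
    print(kubelet_config_state())
```

Once `kubeadm join` runs on this node, the file appears and the restart loop visible later in the log would stop on its own.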
May 9 01:13:01.564715 login[1542]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 9 01:13:01.583069 systemd-logind[1455]: New session 3 of user core.
May 9 01:13:01.589391 systemd[1]: Started session-3.scope - Session 3 of User core.
May 9 01:13:01.589329 login[1543]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 9 01:13:01.605643 systemd-logind[1455]: New session 4 of user core.
May 9 01:13:01.616167 systemd[1]: Started session-4.scope - Session 4 of User core.
May 9 01:13:02.205303 sshd[1607]: Accepted publickey for core from 172.24.4.1 port 43818 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:13:02.208107 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:13:02.218786 systemd-logind[1455]: New session 5 of user core.
May 9 01:13:02.230383 systemd[1]: Started session-5.scope - Session 5 of User core.
May 9 01:13:02.762886 coreos-metadata[1445]: May 09 01:13:02.762 WARN failed to locate config-drive, using the metadata service API instead
May 9 01:13:02.857090 coreos-metadata[1445]: May 09 01:13:02.857 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
May 9 01:13:03.089524 sshd[1636]: Connection closed by 172.24.4.1 port 43818
May 9 01:13:03.090602 sshd-session[1607]: pam_unix(sshd:session): session closed for user core
May 9 01:13:03.097447 systemd[1]: sshd@2-172.24.4.244:22-172.24.4.1:43818.service: Deactivated successfully.
May 9 01:13:03.101548 systemd[1]: session-5.scope: Deactivated successfully.
May 9 01:13:03.105239 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit.
May 9 01:13:03.108089 systemd-logind[1455]: Removed session 5.
May 9 01:13:03.119657 coreos-metadata[1445]: May 09 01:13:03.119 INFO Fetch successful
May 9 01:13:03.119657 coreos-metadata[1445]: May 09 01:13:03.119 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 9 01:13:03.134927 coreos-metadata[1445]: May 09 01:13:03.134 INFO Fetch successful
May 9 01:13:03.135265 coreos-metadata[1445]: May 09 01:13:03.135 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
May 9 01:13:03.149882 coreos-metadata[1445]: May 09 01:13:03.149 INFO Fetch successful
May 9 01:13:03.149882 coreos-metadata[1445]: May 09 01:13:03.149 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
May 9 01:13:03.166115 coreos-metadata[1445]: May 09 01:13:03.165 INFO Fetch successful
May 9 01:13:03.166337 coreos-metadata[1445]: May 09 01:13:03.166 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
May 9 01:13:03.180407 coreos-metadata[1445]: May 09 01:13:03.180 INFO Fetch successful
May 9 01:13:03.180407 coreos-metadata[1445]: May 09 01:13:03.180 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
May 9 01:13:03.194104 coreos-metadata[1445]: May 09 01:13:03.194 INFO Fetch successful
May 9 01:13:03.246041 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 9 01:13:03.248218 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 9 01:13:03.294588 coreos-metadata[1513]: May 09 01:13:03.294 WARN failed to locate config-drive, using the metadata service API instead
May 9 01:13:03.337028 coreos-metadata[1513]: May 09 01:13:03.336 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
May 9 01:13:03.351523 coreos-metadata[1513]: May 09 01:13:03.351 INFO Fetch successful
May 9 01:13:03.351523 coreos-metadata[1513]: May 09 01:13:03.351 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
May 9 01:13:03.367238 coreos-metadata[1513]: May 09 01:13:03.367 INFO Fetch successful
May 9 01:13:03.372930 unknown[1513]: wrote ssh authorized keys file for user: core
May 9 01:13:03.419999 update-ssh-keys[1650]: Updated "/home/core/.ssh/authorized_keys"
May 9 01:13:03.421102 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 9 01:13:03.425296 systemd[1]: Finished sshkeys.service.
May 9 01:13:03.430424 systemd[1]: Reached target multi-user.target - Multi-User System.
May 9 01:13:03.431028 systemd[1]: Startup finished in 1.299s (kernel) + 15.444s (initrd) + 11.225s (userspace) = 27.969s.
May 9 01:13:10.923743 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 9 01:13:10.927534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 01:13:11.266496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 01:13:11.273385 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 01:13:11.326128 kubelet[1661]: E0509 01:13:11.325960 1661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 01:13:11.333227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 01:13:11.333378 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 01:13:11.333718 systemd[1]: kubelet.service: Consumed 254ms CPU time, 94.9M memory peak.
May 9 01:13:13.110934 systemd[1]: Started sshd@3-172.24.4.244:22-172.24.4.1:59098.service - OpenSSH per-connection server daemon (172.24.4.1:59098).
May 9 01:13:14.450552 sshd[1672]: Accepted publickey for core from 172.24.4.1 port 59098 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:13:14.453292 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:13:14.465078 systemd-logind[1455]: New session 6 of user core.
May 9 01:13:14.468278 systemd[1]: Started session-6.scope - Session 6 of User core.
May 9 01:13:15.064321 sshd[1674]: Connection closed by 172.24.4.1 port 59098
May 9 01:13:15.065367 sshd-session[1672]: pam_unix(sshd:session): session closed for user core
May 9 01:13:15.082563 systemd[1]: sshd@3-172.24.4.244:22-172.24.4.1:59098.service: Deactivated successfully.
May 9 01:13:15.086222 systemd[1]: session-6.scope: Deactivated successfully.
May 9 01:13:15.091410 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit.
May 9 01:13:15.093672 systemd[1]: Started sshd@4-172.24.4.244:22-172.24.4.1:57474.service - OpenSSH per-connection server daemon (172.24.4.1:57474).
May 9 01:13:15.096666 systemd-logind[1455]: Removed session 6.
May 9 01:13:16.417414 sshd[1679]: Accepted publickey for core from 172.24.4.1 port 57474 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:13:16.420767 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:13:16.434596 systemd-logind[1455]: New session 7 of user core.
May 9 01:13:16.443386 systemd[1]: Started session-7.scope - Session 7 of User core.
May 9 01:13:16.970034 sshd[1682]: Connection closed by 172.24.4.1 port 57474
May 9 01:13:16.969320 sshd-session[1679]: pam_unix(sshd:session): session closed for user core
May 9 01:13:16.986334 systemd[1]: sshd@4-172.24.4.244:22-172.24.4.1:57474.service: Deactivated successfully.
May 9 01:13:16.990315 systemd[1]: session-7.scope: Deactivated successfully.
May 9 01:13:16.992796 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit.
May 9 01:13:16.997755 systemd[1]: Started sshd@5-172.24.4.244:22-172.24.4.1:57486.service - OpenSSH per-connection server daemon (172.24.4.1:57486).
May 9 01:13:17.001060 systemd-logind[1455]: Removed session 7.
May 9 01:13:18.368202 sshd[1687]: Accepted publickey for core from 172.24.4.1 port 57486 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:13:18.370925 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:13:18.382125 systemd-logind[1455]: New session 8 of user core.
May 9 01:13:18.390277 systemd[1]: Started session-8.scope - Session 8 of User core.
May 9 01:13:19.090023 sshd[1690]: Connection closed by 172.24.4.1 port 57486
May 9 01:13:19.088768 sshd-session[1687]: pam_unix(sshd:session): session closed for user core
May 9 01:13:19.103740 systemd[1]: sshd@5-172.24.4.244:22-172.24.4.1:57486.service: Deactivated successfully.
May 9 01:13:19.107027 systemd[1]: session-8.scope: Deactivated successfully.
May 9 01:13:19.108778 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit.
May 9 01:13:19.114552 systemd[1]: Started sshd@6-172.24.4.244:22-172.24.4.1:57490.service - OpenSSH per-connection server daemon (172.24.4.1:57490).
May 9 01:13:19.117084 systemd-logind[1455]: Removed session 8.
May 9 01:13:20.554424 sshd[1695]: Accepted publickey for core from 172.24.4.1 port 57490 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:13:20.557106 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:13:20.569076 systemd-logind[1455]: New session 9 of user core.
May 9 01:13:20.572286 systemd[1]: Started session-9.scope - Session 9 of User core.
May 9 01:13:21.021423 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 9 01:13:21.022191 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 01:13:21.044621 sudo[1699]: pam_unix(sudo:session): session closed for user root
May 9 01:13:21.264020 sshd[1698]: Connection closed by 172.24.4.1 port 57490
May 9 01:13:21.264453 sshd-session[1695]: pam_unix(sshd:session): session closed for user core
May 9 01:13:21.281643 systemd[1]: sshd@6-172.24.4.244:22-172.24.4.1:57490.service: Deactivated successfully.
May 9 01:13:21.284698 systemd[1]: session-9.scope: Deactivated successfully.
May 9 01:13:21.286325 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit.
May 9 01:13:21.290935 systemd[1]: Started sshd@7-172.24.4.244:22-172.24.4.1:57500.service - OpenSSH per-connection server daemon (172.24.4.1:57500).
May 9 01:13:21.293211 systemd-logind[1455]: Removed session 9.
May 9 01:13:21.423062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 9 01:13:21.426969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 01:13:21.835894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 01:13:21.854748 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 01:13:21.946420 kubelet[1715]: E0509 01:13:21.946288 1715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 01:13:21.950591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 01:13:21.950884 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 01:13:21.951639 systemd[1]: kubelet.service: Consumed 304ms CPU time, 97.8M memory peak.
May 9 01:13:22.530131 sshd[1704]: Accepted publickey for core from 172.24.4.1 port 57500 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:13:22.533044 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:13:22.545790 systemd-logind[1455]: New session 10 of user core.
May 9 01:13:22.555304 systemd[1]: Started session-10.scope - Session 10 of User core.
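[Editor's note] The `Scheduled restart job, restart counter is at 2` line above, roughly ten seconds after the previous kubelet exit, is systemd's restart handling relaunching the failed unit. A drop-in consistent with that cadence would look like the sketch below; the path and values are inferred from the timestamps in this log, not read from the actual kubelet.service unit on this host.

```
# /etc/systemd/system/kubelet.service.d/10-restart.conf (hypothetical path)
[Service]
Restart=always
RestartSec=10
```

With `Restart=always`, systemd keeps retrying indefinitely, which is why the same config.yaml error reappears at counters 1, 2, and 3 in this log until kubeadm provisions the node.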
May 9 01:13:23.002712 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 9 01:13:23.003439 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 01:13:23.010944 sudo[1725]: pam_unix(sudo:session): session closed for user root
May 9 01:13:23.022971 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 9 01:13:23.024387 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 01:13:23.046101 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 01:13:23.123758 augenrules[1747]: No rules
May 9 01:13:23.125301 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 01:13:23.125809 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 01:13:23.127885 sudo[1724]: pam_unix(sudo:session): session closed for user root
May 9 01:13:23.309827 sshd[1723]: Connection closed by 172.24.4.1 port 57500
May 9 01:13:23.310736 sshd-session[1704]: pam_unix(sshd:session): session closed for user core
May 9 01:13:23.327906 systemd[1]: sshd@7-172.24.4.244:22-172.24.4.1:57500.service: Deactivated successfully.
May 9 01:13:23.333333 systemd[1]: session-10.scope: Deactivated successfully.
May 9 01:13:23.337033 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit.
May 9 01:13:23.339847 systemd[1]: Started sshd@8-172.24.4.244:22-172.24.4.1:57514.service - OpenSSH per-connection server daemon (172.24.4.1:57514).
May 9 01:13:23.343192 systemd-logind[1455]: Removed session 10.
May 9 01:13:24.534308 sshd[1755]: Accepted publickey for core from 172.24.4.1 port 57514 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:13:24.537032 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:13:24.547821 systemd-logind[1455]: New session 11 of user core.
May 9 01:13:24.556377 systemd[1]: Started session-11.scope - Session 11 of User core.
May 9 01:13:24.973620 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 9 01:13:24.974306 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 9 01:13:25.725424 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 9 01:13:25.739471 (dockerd)[1777]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 9 01:13:26.268097 dockerd[1777]: time="2025-05-09T01:13:26.267944852Z" level=info msg="Starting up"
May 9 01:13:26.274319 dockerd[1777]: time="2025-05-09T01:13:26.274252723Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 9 01:13:26.328520 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4015802563-merged.mount: Deactivated successfully.
May 9 01:13:26.357279 dockerd[1777]: time="2025-05-09T01:13:26.357197325Z" level=info msg="Loading containers: start."
May 9 01:13:26.531045 kernel: Initializing XFRM netlink socket
May 9 01:13:26.606261 systemd-networkd[1396]: docker0: Link UP
May 9 01:13:26.649924 dockerd[1777]: time="2025-05-09T01:13:26.649886724Z" level=info msg="Loading containers: done."
May 9 01:13:26.671016 dockerd[1777]: time="2025-05-09T01:13:26.670669673Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 9 01:13:26.671016 dockerd[1777]: time="2025-05-09T01:13:26.670753780Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 9 01:13:26.671016 dockerd[1777]: time="2025-05-09T01:13:26.670848979Z" level=info msg="Daemon has completed initialization"
May 9 01:13:26.715054 dockerd[1777]: time="2025-05-09T01:13:26.714947125Z" level=info msg="API listen on /run/docker.sock"
May 9 01:13:26.715498 systemd[1]: Started docker.service - Docker Application Container Engine.
May 9 01:13:28.529323 containerd[1478]: time="2025-05-09T01:13:28.529283342Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 9 01:13:29.312354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount930847503.mount: Deactivated successfully.
May 9 01:13:31.219239 containerd[1478]: time="2025-05-09T01:13:31.219106817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 01:13:31.220502 containerd[1478]: time="2025-05-09T01:13:31.220234049Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674881"
May 9 01:13:31.221939 containerd[1478]: time="2025-05-09T01:13:31.221871272Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 01:13:31.225078 containerd[1478]: time="2025-05-09T01:13:31.225002918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 01:13:31.226518 containerd[1478]: time="2025-05-09T01:13:31.226054478Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.696723477s"
May 9 01:13:31.226518 containerd[1478]: time="2025-05-09T01:13:31.226098912Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 9 01:13:31.249826 containerd[1478]: time="2025-05-09T01:13:31.249770522Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 9 01:13:32.173277 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 9 01:13:32.179488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 01:13:32.341108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 01:13:32.350317 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 9 01:13:32.401789 kubelet[2051]: E0509 01:13:32.401645 2051 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 9 01:13:32.404441 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 9 01:13:32.404572 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 01:13:32.404923 systemd[1]: kubelet.service: Consumed 183ms CPU time, 95.4M memory peak.
May 9 01:13:33.626380 containerd[1478]: time="2025-05-09T01:13:33.626296992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 01:13:33.627706 containerd[1478]: time="2025-05-09T01:13:33.627472443Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617542"
May 9 01:13:33.628788 containerd[1478]: time="2025-05-09T01:13:33.628724850Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 01:13:33.631917 containerd[1478]: time="2025-05-09T01:13:33.631844903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 01:13:33.632957 containerd[1478]: time="2025-05-09T01:13:33.632836609Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 2.382821837s"
May 9 01:13:33.632957 containerd[1478]: time="2025-05-09T01:13:33.632871995Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 9 01:13:33.654875 containerd[1478]: time="2025-05-09T01:13:33.654833826Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 9 01:13:35.495430 containerd[1478]: time="2025-05-09T01:13:35.493702229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 01:13:35.496164 containerd[1478]: time="2025-05-09T01:13:35.496104698Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903690"
May 9 01:13:35.497928 containerd[1478]: time="2025-05-09T01:13:35.497889927Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 01:13:35.500783 containerd[1478]: time="2025-05-09T01:13:35.500722866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 01:13:35.502202 containerd[1478]: time="2025-05-09T01:13:35.502015247Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.846901956s"
May 9 01:13:35.502202 containerd[1478]: time="2025-05-09T01:13:35.502080629Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 9 01:13:35.524808 containerd[1478]: time="2025-05-09T01:13:35.524767076Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 9 01:13:36.896131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3978906647.mount: Deactivated successfully.
May 9 01:13:37.401984 containerd[1478]: time="2025-05-09T01:13:37.401898206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 01:13:37.403624 containerd[1478]: time="2025-05-09T01:13:37.403444833Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185825"
May 9 01:13:37.405210 containerd[1478]: time="2025-05-09T01:13:37.405138128Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 01:13:37.407557 containerd[1478]: time="2025-05-09T01:13:37.407512012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 01:13:37.408490 containerd[1478]: time="2025-05-09T01:13:37.408127159Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 1.883316261s"
May 9 01:13:37.408490 containerd[1478]: time="2025-05-09T01:13:37.408179537Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 9 01:13:37.427082 containerd[1478]: time="2025-05-09T01:13:37.427035355Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 9 01:13:38.077787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1007597656.mount: Deactivated successfully.
May 9 01:13:39.708446 containerd[1478]: time="2025-05-09T01:13:39.708280080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:13:39.712312 containerd[1478]: time="2025-05-09T01:13:39.712203135Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" May 9 01:13:39.714123 containerd[1478]: time="2025-05-09T01:13:39.714059595Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:13:39.721366 containerd[1478]: time="2025-05-09T01:13:39.721270110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:13:39.726614 containerd[1478]: time="2025-05-09T01:13:39.724786370Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.29769489s" May 9 01:13:39.726614 containerd[1478]: time="2025-05-09T01:13:39.724860750Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 9 01:13:39.769174 containerd[1478]: time="2025-05-09T01:13:39.769095435Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 9 01:13:40.413337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount937190294.mount: Deactivated successfully. 
May 9 01:13:40.430161 containerd[1478]: time="2025-05-09T01:13:40.430064439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:13:40.432330 containerd[1478]: time="2025-05-09T01:13:40.432231783Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" May 9 01:13:40.434172 containerd[1478]: time="2025-05-09T01:13:40.434037376Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:13:40.439377 containerd[1478]: time="2025-05-09T01:13:40.439247158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:13:40.441171 containerd[1478]: time="2025-05-09T01:13:40.441079813Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 671.898767ms" May 9 01:13:40.441171 containerd[1478]: time="2025-05-09T01:13:40.441150716Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 9 01:13:40.485763 containerd[1478]: time="2025-05-09T01:13:40.485366593Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 9 01:13:41.202736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount986707713.mount: Deactivated successfully. May 9 01:13:41.477168 update_engine[1461]: I20250509 01:13:41.476079 1461 update_attempter.cc:509] Updating boot flags... 
May 9 01:13:41.539011 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2178) May 9 01:13:41.651055 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2182) May 9 01:13:41.749012 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2182) May 9 01:13:42.423261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 9 01:13:42.427911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:13:42.593533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:13:42.603225 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 01:13:42.781352 kubelet[2226]: E0509 01:13:42.781068 2226 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 01:13:42.783952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 01:13:42.784130 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 01:13:42.784454 systemd[1]: kubelet.service: Consumed 291ms CPU time, 97.2M memory peak. 
May 9 01:13:44.198060 containerd[1478]: time="2025-05-09T01:13:44.196165406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:13:44.207118 containerd[1478]: time="2025-05-09T01:13:44.206941319Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" May 9 01:13:44.216338 containerd[1478]: time="2025-05-09T01:13:44.216059730Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:13:44.325495 containerd[1478]: time="2025-05-09T01:13:44.324829482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:13:44.327438 containerd[1478]: time="2025-05-09T01:13:44.327323577Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.841871473s" May 9 01:13:44.327564 containerd[1478]: time="2025-05-09T01:13:44.327436169Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 9 01:13:49.181323 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:13:49.181772 systemd[1]: kubelet.service: Consumed 291ms CPU time, 97.2M memory peak. May 9 01:13:49.186297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:13:49.228421 systemd[1]: Reload requested from client PID 2318 ('systemctl') (unit session-11.scope)... 
May 9 01:13:49.228458 systemd[1]: Reloading... May 9 01:13:49.343072 zram_generator::config[2367]: No configuration found. May 9 01:13:49.498382 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 01:13:49.624037 systemd[1]: Reloading finished in 394 ms. May 9 01:13:49.685619 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 01:13:49.685708 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 01:13:49.686052 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:13:49.686108 systemd[1]: kubelet.service: Consumed 109ms CPU time, 83.6M memory peak. May 9 01:13:49.687798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:13:50.105255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:13:50.134938 (kubelet)[2430]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 01:13:50.217150 kubelet[2430]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 01:13:50.217150 kubelet[2430]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 01:13:50.217150 kubelet[2430]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 9 01:13:50.410599 kubelet[2430]: I0509 01:13:50.409549 2430 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 01:13:50.877189 kubelet[2430]: I0509 01:13:50.877126 2430 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 01:13:50.877460 kubelet[2430]: I0509 01:13:50.877437 2430 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 01:13:50.878167 kubelet[2430]: I0509 01:13:50.878132 2430 server.go:927] "Client rotation is on, will bootstrap in background" May 9 01:13:51.359028 kubelet[2430]: I0509 01:13:51.358440 2430 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 01:13:51.360845 kubelet[2430]: E0509 01:13:51.360366 2430 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.244:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:51.382949 kubelet[2430]: I0509 01:13:51.382835 2430 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 01:13:51.387912 kubelet[2430]: I0509 01:13:51.386390 2430 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 01:13:51.387912 kubelet[2430]: I0509 01:13:51.386484 2430 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-58e4f3488e.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 01:13:51.387912 kubelet[2430]: I0509 01:13:51.387350 2430 topology_manager.go:138] "Creating topology manager with none 
policy" May 9 01:13:51.387912 kubelet[2430]: I0509 01:13:51.387378 2430 container_manager_linux.go:301] "Creating device plugin manager" May 9 01:13:51.388482 kubelet[2430]: I0509 01:13:51.387633 2430 state_mem.go:36] "Initialized new in-memory state store" May 9 01:13:51.390506 kubelet[2430]: I0509 01:13:51.389796 2430 kubelet.go:400] "Attempting to sync node with API server" May 9 01:13:51.390506 kubelet[2430]: I0509 01:13:51.389841 2430 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 01:13:51.390506 kubelet[2430]: I0509 01:13:51.389889 2430 kubelet.go:312] "Adding apiserver pod source" May 9 01:13:51.390506 kubelet[2430]: I0509 01:13:51.389940 2430 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 01:13:51.402053 kubelet[2430]: W0509 01:13:51.400809 2430 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-58e4f3488e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:51.402053 kubelet[2430]: E0509 01:13:51.401025 2430 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-58e4f3488e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:51.402671 kubelet[2430]: I0509 01:13:51.402616 2430 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 9 01:13:51.406598 kubelet[2430]: I0509 01:13:51.406549 2430 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 01:13:51.406716 kubelet[2430]: W0509 01:13:51.406657 2430 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not 
exist. Recreating. May 9 01:13:51.408199 kubelet[2430]: I0509 01:13:51.408130 2430 server.go:1264] "Started kubelet" May 9 01:13:51.423030 kubelet[2430]: W0509 01:13:51.422229 2430 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.244:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:51.423030 kubelet[2430]: E0509 01:13:51.422338 2430 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.244:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:51.423030 kubelet[2430]: I0509 01:13:51.422391 2430 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 01:13:51.423762 kubelet[2430]: I0509 01:13:51.423653 2430 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 01:13:51.424666 kubelet[2430]: I0509 01:13:51.424615 2430 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 01:13:51.425227 kubelet[2430]: E0509 01:13:51.424940 2430 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.244:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.244:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-n-58e4f3488e.novalocal.183db6cf2c580f83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-58e4f3488e.novalocal,UID:ci-4284-0-0-n-58e4f3488e.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-58e4f3488e.novalocal,},FirstTimestamp:2025-05-09 01:13:51.408074627 +0000 UTC m=+1.264764914,LastTimestamp:2025-05-09 01:13:51.408074627 
+0000 UTC m=+1.264764914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-58e4f3488e.novalocal,}" May 9 01:13:51.425823 kubelet[2430]: I0509 01:13:51.425786 2430 server.go:455] "Adding debug handlers to kubelet server" May 9 01:13:51.431759 kubelet[2430]: I0509 01:13:51.431690 2430 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 01:13:51.436526 kubelet[2430]: E0509 01:13:51.436452 2430 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 01:13:51.441049 kubelet[2430]: E0509 01:13:51.440218 2430 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-58e4f3488e.novalocal\" not found" May 9 01:13:51.441049 kubelet[2430]: I0509 01:13:51.440312 2430 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 01:13:51.441049 kubelet[2430]: I0509 01:13:51.440526 2430 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 01:13:51.441049 kubelet[2430]: I0509 01:13:51.440623 2430 reconciler.go:26] "Reconciler: start to sync state" May 9 01:13:51.442509 kubelet[2430]: W0509 01:13:51.441332 2430 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:51.442509 kubelet[2430]: E0509 01:13:51.441436 2430 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:51.442509 kubelet[2430]: E0509 01:13:51.442422 2430 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://172.24.4.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-58e4f3488e.novalocal?timeout=10s\": dial tcp 172.24.4.244:6443: connect: connection refused" interval="200ms" May 9 01:13:51.445246 kubelet[2430]: I0509 01:13:51.445200 2430 factory.go:221] Registration of the systemd container factory successfully May 9 01:13:51.445696 kubelet[2430]: I0509 01:13:51.445664 2430 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 01:13:51.449093 kubelet[2430]: I0509 01:13:51.447903 2430 factory.go:221] Registration of the containerd container factory successfully May 9 01:13:51.463106 kubelet[2430]: I0509 01:13:51.463070 2430 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 01:13:51.464155 kubelet[2430]: I0509 01:13:51.464140 2430 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 01:13:51.464225 kubelet[2430]: I0509 01:13:51.464217 2430 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 01:13:51.464298 kubelet[2430]: I0509 01:13:51.464289 2430 kubelet.go:2337] "Starting kubelet main sync loop" May 9 01:13:51.464399 kubelet[2430]: E0509 01:13:51.464376 2430 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 01:13:51.473501 kubelet[2430]: W0509 01:13:51.473457 2430 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:51.473646 kubelet[2430]: E0509 01:13:51.473634 2430 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:51.480960 kubelet[2430]: I0509 01:13:51.480939 2430 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 01:13:51.481117 kubelet[2430]: I0509 01:13:51.481106 2430 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 01:13:51.481181 kubelet[2430]: I0509 01:13:51.481171 2430 state_mem.go:36] "Initialized new in-memory state store" May 9 01:13:51.493774 kubelet[2430]: I0509 01:13:51.493761 2430 policy_none.go:49] "None policy: Start" May 9 01:13:51.494515 kubelet[2430]: I0509 01:13:51.494483 2430 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 01:13:51.494515 kubelet[2430]: I0509 01:13:51.494517 2430 state_mem.go:35] "Initializing new in-memory state store" May 9 01:13:51.500692 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 9 01:13:51.514665 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 01:13:51.526790 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 01:13:51.528537 kubelet[2430]: I0509 01:13:51.528490 2430 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 01:13:51.528724 kubelet[2430]: I0509 01:13:51.528678 2430 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 01:13:51.529394 kubelet[2430]: I0509 01:13:51.529366 2430 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 01:13:51.534305 kubelet[2430]: E0509 01:13:51.533803 2430 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284-0-0-n-58e4f3488e.novalocal\" not found" May 9 01:13:51.543122 kubelet[2430]: I0509 01:13:51.542386 2430 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.543122 kubelet[2430]: E0509 01:13:51.542683 2430 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.244:6443/api/v1/nodes\": dial tcp 172.24.4.244:6443: connect: connection refused" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.565025 kubelet[2430]: I0509 01:13:51.564811 2430 topology_manager.go:215] "Topology Admit Handler" podUID="cdce352c4dc1895da3487643d226a42c" podNamespace="kube-system" podName="kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.567861 kubelet[2430]: I0509 01:13:51.567475 2430 topology_manager.go:215] "Topology Admit Handler" podUID="85e077bbafe6b3f7784c3ddad080445e" podNamespace="kube-system" podName="kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.570922 kubelet[2430]: I0509 01:13:51.570538 2430 topology_manager.go:215] "Topology Admit 
Handler" podUID="10d939345df56b44868f2dd96604a79b" podNamespace="kube-system" podName="kube-scheduler-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.583792 systemd[1]: Created slice kubepods-burstable-podcdce352c4dc1895da3487643d226a42c.slice - libcontainer container kubepods-burstable-podcdce352c4dc1895da3487643d226a42c.slice. May 9 01:13:51.603681 systemd[1]: Created slice kubepods-burstable-pod85e077bbafe6b3f7784c3ddad080445e.slice - libcontainer container kubepods-burstable-pod85e077bbafe6b3f7784c3ddad080445e.slice. May 9 01:13:51.614659 systemd[1]: Created slice kubepods-burstable-pod10d939345df56b44868f2dd96604a79b.slice - libcontainer container kubepods-burstable-pod10d939345df56b44868f2dd96604a79b.slice. May 9 01:13:51.643838 kubelet[2430]: E0509 01:13:51.643765 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-58e4f3488e.novalocal?timeout=10s\": dial tcp 172.24.4.244:6443: connect: connection refused" interval="400ms" May 9 01:13:51.742440 kubelet[2430]: I0509 01:13:51.742364 2430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdce352c4dc1895da3487643d226a42c-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"cdce352c4dc1895da3487643d226a42c\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.742440 kubelet[2430]: I0509 01:13:51.742451 2430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdce352c4dc1895da3487643d226a42c-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"cdce352c4dc1895da3487643d226a42c\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.742738 kubelet[2430]: I0509 01:13:51.742503 2430 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/85e077bbafe6b3f7784c3ddad080445e-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"85e077bbafe6b3f7784c3ddad080445e\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.742738 kubelet[2430]: I0509 01:13:51.742552 2430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85e077bbafe6b3f7784c3ddad080445e-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"85e077bbafe6b3f7784c3ddad080445e\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.742738 kubelet[2430]: I0509 01:13:51.742602 2430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85e077bbafe6b3f7784c3ddad080445e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"85e077bbafe6b3f7784c3ddad080445e\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.742738 kubelet[2430]: I0509 01:13:51.742649 2430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10d939345df56b44868f2dd96604a79b-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"10d939345df56b44868f2dd96604a79b\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.743042 kubelet[2430]: I0509 01:13:51.742696 2430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/cdce352c4dc1895da3487643d226a42c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"cdce352c4dc1895da3487643d226a42c\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.743042 kubelet[2430]: I0509 01:13:51.742738 2430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85e077bbafe6b3f7784c3ddad080445e-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"85e077bbafe6b3f7784c3ddad080445e\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.743042 kubelet[2430]: I0509 01:13:51.742790 2430 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85e077bbafe6b3f7784c3ddad080445e-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"85e077bbafe6b3f7784c3ddad080445e\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.746409 kubelet[2430]: I0509 01:13:51.746307 2430 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.747081 kubelet[2430]: E0509 01:13:51.746952 2430 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.244:6443/api/v1/nodes\": dial tcp 172.24.4.244:6443: connect: connection refused" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:51.900069 containerd[1478]: time="2025-05-09T01:13:51.899843305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal,Uid:cdce352c4dc1895da3487643d226a42c,Namespace:kube-system,Attempt:0,}" May 9 01:13:51.911060 containerd[1478]: time="2025-05-09T01:13:51.910949566Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal,Uid:85e077bbafe6b3f7784c3ddad080445e,Namespace:kube-system,Attempt:0,}" May 9 01:13:51.925736 containerd[1478]: time="2025-05-09T01:13:51.925274572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-58e4f3488e.novalocal,Uid:10d939345df56b44868f2dd96604a79b,Namespace:kube-system,Attempt:0,}" May 9 01:13:52.044971 kubelet[2430]: E0509 01:13:52.044891 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-58e4f3488e.novalocal?timeout=10s\": dial tcp 172.24.4.244:6443: connect: connection refused" interval="800ms" May 9 01:13:52.151517 kubelet[2430]: I0509 01:13:52.151239 2430 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:52.152929 kubelet[2430]: E0509 01:13:52.152813 2430 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.244:6443/api/v1/nodes\": dial tcp 172.24.4.244:6443: connect: connection refused" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:52.287743 kubelet[2430]: W0509 01:13:52.287595 2430 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.244:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:52.287743 kubelet[2430]: E0509 01:13:52.287734 2430 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.244:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:52.512794 kubelet[2430]: W0509 01:13:52.512678 2430 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://172.24.4.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:52.512818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085751956.mount: Deactivated successfully. May 9 01:13:52.516913 kubelet[2430]: E0509 01:13:52.515108 2430 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.244:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:52.526805 containerd[1478]: time="2025-05-09T01:13:52.526689892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 01:13:52.530253 containerd[1478]: time="2025-05-09T01:13:52.530131293Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 01:13:52.533386 containerd[1478]: time="2025-05-09T01:13:52.533287769Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 9 01:13:52.535016 containerd[1478]: time="2025-05-09T01:13:52.534876452Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 9 01:13:52.538826 containerd[1478]: time="2025-05-09T01:13:52.538767116Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 01:13:52.541892 containerd[1478]: time="2025-05-09T01:13:52.541601697Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 9 
01:13:52.543609 containerd[1478]: time="2025-05-09T01:13:52.543470746Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 01:13:52.550020 containerd[1478]: time="2025-05-09T01:13:52.548888577Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 642.192337ms" May 9 01:13:52.551839 containerd[1478]: time="2025-05-09T01:13:52.551754898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 01:13:52.553055 containerd[1478]: time="2025-05-09T01:13:52.552869311Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 637.831928ms" May 9 01:13:52.597260 containerd[1478]: time="2025-05-09T01:13:52.596935841Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 662.425489ms" May 9 01:13:52.614036 containerd[1478]: time="2025-05-09T01:13:52.613130514Z" level=info msg="connecting to shim 
e9abf94577906a6811ccc2d50c318e9d57728aa90bddabc826b06a1dba66a0f6" address="unix:///run/containerd/s/0314fbbb3e51f3dc0a0d0e82c330ec3ff7ef3c2db8e82d40ffa6d9d490025b01" namespace=k8s.io protocol=ttrpc version=3 May 9 01:13:52.624850 containerd[1478]: time="2025-05-09T01:13:52.624803208Z" level=info msg="connecting to shim 51c913f5f0583b4f10c48c188e7699c65f048299d2e85b9a1974b1e6f9c14e73" address="unix:///run/containerd/s/758915bf6573907fe485a86ebb56272fe01fc234e3430ad5770af1bf716f0c8f" namespace=k8s.io protocol=ttrpc version=3 May 9 01:13:52.634711 containerd[1478]: time="2025-05-09T01:13:52.634652189Z" level=info msg="connecting to shim 7ebf76892767ef67a00f34bc3745db9046e547d451416f2ba35cc18141506428" address="unix:///run/containerd/s/82f1c68707f24dcf7198e288f2ab1cb81191f6844eeefabbfbdf42416107a92c" namespace=k8s.io protocol=ttrpc version=3 May 9 01:13:52.663495 systemd[1]: Started cri-containerd-e9abf94577906a6811ccc2d50c318e9d57728aa90bddabc826b06a1dba66a0f6.scope - libcontainer container e9abf94577906a6811ccc2d50c318e9d57728aa90bddabc826b06a1dba66a0f6. May 9 01:13:52.670046 systemd[1]: Started cri-containerd-51c913f5f0583b4f10c48c188e7699c65f048299d2e85b9a1974b1e6f9c14e73.scope - libcontainer container 51c913f5f0583b4f10c48c188e7699c65f048299d2e85b9a1974b1e6f9c14e73. May 9 01:13:52.677857 systemd[1]: Started cri-containerd-7ebf76892767ef67a00f34bc3745db9046e547d451416f2ba35cc18141506428.scope - libcontainer container 7ebf76892767ef67a00f34bc3745db9046e547d451416f2ba35cc18141506428. 
May 9 01:13:52.745810 containerd[1478]: time="2025-05-09T01:13:52.745773697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal,Uid:cdce352c4dc1895da3487643d226a42c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9abf94577906a6811ccc2d50c318e9d57728aa90bddabc826b06a1dba66a0f6\"" May 9 01:13:52.754183 containerd[1478]: time="2025-05-09T01:13:52.754114916Z" level=info msg="CreateContainer within sandbox \"e9abf94577906a6811ccc2d50c318e9d57728aa90bddabc826b06a1dba66a0f6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 01:13:52.758833 containerd[1478]: time="2025-05-09T01:13:52.758695065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal,Uid:85e077bbafe6b3f7784c3ddad080445e,Namespace:kube-system,Attempt:0,} returns sandbox id \"51c913f5f0583b4f10c48c188e7699c65f048299d2e85b9a1974b1e6f9c14e73\"" May 9 01:13:52.762576 containerd[1478]: time="2025-05-09T01:13:52.762470663Z" level=info msg="CreateContainer within sandbox \"51c913f5f0583b4f10c48c188e7699c65f048299d2e85b9a1974b1e6f9c14e73\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 01:13:52.774382 containerd[1478]: time="2025-05-09T01:13:52.773772041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-58e4f3488e.novalocal,Uid:10d939345df56b44868f2dd96604a79b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ebf76892767ef67a00f34bc3745db9046e547d451416f2ba35cc18141506428\"" May 9 01:13:52.776676 containerd[1478]: time="2025-05-09T01:13:52.776648841Z" level=info msg="CreateContainer within sandbox \"7ebf76892767ef67a00f34bc3745db9046e547d451416f2ba35cc18141506428\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 01:13:52.780449 containerd[1478]: time="2025-05-09T01:13:52.780421433Z" level=info msg="Container 
fb922d3b219512ce034fdf1edf71ffd7fb638432f6a9dee9fdede9f290b5599e: CDI devices from CRI Config.CDIDevices: []" May 9 01:13:52.783069 containerd[1478]: time="2025-05-09T01:13:52.782736419Z" level=info msg="Container dd51ebaa7ac78fdd1d183985c4ec656e367275ad852ce96f3e89677d394c5556: CDI devices from CRI Config.CDIDevices: []" May 9 01:13:52.794883 containerd[1478]: time="2025-05-09T01:13:52.794850051Z" level=info msg="CreateContainer within sandbox \"e9abf94577906a6811ccc2d50c318e9d57728aa90bddabc826b06a1dba66a0f6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fb922d3b219512ce034fdf1edf71ffd7fb638432f6a9dee9fdede9f290b5599e\"" May 9 01:13:52.795765 containerd[1478]: time="2025-05-09T01:13:52.795720606Z" level=info msg="StartContainer for \"fb922d3b219512ce034fdf1edf71ffd7fb638432f6a9dee9fdede9f290b5599e\"" May 9 01:13:52.798591 containerd[1478]: time="2025-05-09T01:13:52.797891632Z" level=info msg="connecting to shim fb922d3b219512ce034fdf1edf71ffd7fb638432f6a9dee9fdede9f290b5599e" address="unix:///run/containerd/s/0314fbbb3e51f3dc0a0d0e82c330ec3ff7ef3c2db8e82d40ffa6d9d490025b01" protocol=ttrpc version=3 May 9 01:13:52.800858 containerd[1478]: time="2025-05-09T01:13:52.800825981Z" level=info msg="Container af7404a3466f142b0ae1e0aefe9a601b8692b7f8a0d0412c02bbfdff25f80f26: CDI devices from CRI Config.CDIDevices: []" May 9 01:13:52.819738 containerd[1478]: time="2025-05-09T01:13:52.819648207Z" level=info msg="CreateContainer within sandbox \"7ebf76892767ef67a00f34bc3745db9046e547d451416f2ba35cc18141506428\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"af7404a3466f142b0ae1e0aefe9a601b8692b7f8a0d0412c02bbfdff25f80f26\"" May 9 01:13:52.820862 containerd[1478]: time="2025-05-09T01:13:52.820377367Z" level=info msg="StartContainer for \"af7404a3466f142b0ae1e0aefe9a601b8692b7f8a0d0412c02bbfdff25f80f26\"" May 9 01:13:52.821438 systemd[1]: Started 
cri-containerd-fb922d3b219512ce034fdf1edf71ffd7fb638432f6a9dee9fdede9f290b5599e.scope - libcontainer container fb922d3b219512ce034fdf1edf71ffd7fb638432f6a9dee9fdede9f290b5599e. May 9 01:13:52.826281 containerd[1478]: time="2025-05-09T01:13:52.825944929Z" level=info msg="connecting to shim af7404a3466f142b0ae1e0aefe9a601b8692b7f8a0d0412c02bbfdff25f80f26" address="unix:///run/containerd/s/82f1c68707f24dcf7198e288f2ab1cb81191f6844eeefabbfbdf42416107a92c" protocol=ttrpc version=3 May 9 01:13:52.827611 kubelet[2430]: W0509 01:13:52.827413 2430 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-58e4f3488e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:52.827611 kubelet[2430]: E0509 01:13:52.827476 2430 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.244:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-58e4f3488e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:52.834534 containerd[1478]: time="2025-05-09T01:13:52.834056928Z" level=info msg="CreateContainer within sandbox \"51c913f5f0583b4f10c48c188e7699c65f048299d2e85b9a1974b1e6f9c14e73\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dd51ebaa7ac78fdd1d183985c4ec656e367275ad852ce96f3e89677d394c5556\"" May 9 01:13:52.835844 containerd[1478]: time="2025-05-09T01:13:52.835793118Z" level=info msg="StartContainer for \"dd51ebaa7ac78fdd1d183985c4ec656e367275ad852ce96f3e89677d394c5556\"" May 9 01:13:52.837856 containerd[1478]: time="2025-05-09T01:13:52.837816646Z" level=info msg="connecting to shim dd51ebaa7ac78fdd1d183985c4ec656e367275ad852ce96f3e89677d394c5556" 
address="unix:///run/containerd/s/758915bf6573907fe485a86ebb56272fe01fc234e3430ad5770af1bf716f0c8f" protocol=ttrpc version=3 May 9 01:13:52.846565 kubelet[2430]: E0509 01:13:52.846410 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.244:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-58e4f3488e.novalocal?timeout=10s\": dial tcp 172.24.4.244:6443: connect: connection refused" interval="1.6s" May 9 01:13:52.871588 kubelet[2430]: W0509 01:13:52.871530 2430 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:52.871588 kubelet[2430]: E0509 01:13:52.871592 2430 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.244:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.244:6443: connect: connection refused May 9 01:13:52.872118 systemd[1]: Started cri-containerd-af7404a3466f142b0ae1e0aefe9a601b8692b7f8a0d0412c02bbfdff25f80f26.scope - libcontainer container af7404a3466f142b0ae1e0aefe9a601b8692b7f8a0d0412c02bbfdff25f80f26. May 9 01:13:52.873686 systemd[1]: Started cri-containerd-dd51ebaa7ac78fdd1d183985c4ec656e367275ad852ce96f3e89677d394c5556.scope - libcontainer container dd51ebaa7ac78fdd1d183985c4ec656e367275ad852ce96f3e89677d394c5556. 
May 9 01:13:52.914409 containerd[1478]: time="2025-05-09T01:13:52.914360819Z" level=info msg="StartContainer for \"fb922d3b219512ce034fdf1edf71ffd7fb638432f6a9dee9fdede9f290b5599e\" returns successfully" May 9 01:13:52.956958 kubelet[2430]: I0509 01:13:52.955807 2430 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:52.956958 kubelet[2430]: E0509 01:13:52.956299 2430 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.244:6443/api/v1/nodes\": dial tcp 172.24.4.244:6443: connect: connection refused" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:52.965773 containerd[1478]: time="2025-05-09T01:13:52.965714820Z" level=info msg="StartContainer for \"dd51ebaa7ac78fdd1d183985c4ec656e367275ad852ce96f3e89677d394c5556\" returns successfully" May 9 01:13:52.983667 containerd[1478]: time="2025-05-09T01:13:52.983629473Z" level=info msg="StartContainer for \"af7404a3466f142b0ae1e0aefe9a601b8692b7f8a0d0412c02bbfdff25f80f26\" returns successfully" May 9 01:13:54.563713 kubelet[2430]: I0509 01:13:54.563121 2430 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:55.341696 kubelet[2430]: E0509 01:13:55.341648 2430 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284-0-0-n-58e4f3488e.novalocal\" not found" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:55.349316 kubelet[2430]: I0509 01:13:55.349279 2430 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:55.401676 kubelet[2430]: I0509 01:13:55.401628 2430 apiserver.go:52] "Watching apiserver" May 9 01:13:55.441549 kubelet[2430]: I0509 01:13:55.441483 2430 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 01:13:57.981173 systemd[1]: Reload requested from client PID 2703 ('systemctl') (unit 
session-11.scope)... May 9 01:13:57.982067 systemd[1]: Reloading... May 9 01:13:58.094072 zram_generator::config[2745]: No configuration found. May 9 01:13:58.275355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 01:13:58.417894 systemd[1]: Reloading finished in 435 ms. May 9 01:13:58.441334 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:13:58.441898 kubelet[2430]: E0509 01:13:58.441256 2430 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4284-0-0-n-58e4f3488e.novalocal.183db6cf2c580f83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-58e4f3488e.novalocal,UID:ci-4284-0-0-n-58e4f3488e.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-58e4f3488e.novalocal,},FirstTimestamp:2025-05-09 01:13:51.408074627 +0000 UTC m=+1.264764914,LastTimestamp:2025-05-09 01:13:51.408074627 +0000 UTC m=+1.264764914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-58e4f3488e.novalocal,}" May 9 01:13:58.451956 systemd[1]: kubelet.service: Deactivated successfully. May 9 01:13:58.452263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:13:58.452316 systemd[1]: kubelet.service: Consumed 1.235s CPU time, 113.6M memory peak. May 9 01:13:58.456225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:13:58.614938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 01:13:58.629823 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 01:13:58.693768 kubelet[2813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 01:13:58.694853 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 01:13:58.694853 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 01:13:58.694853 kubelet[2813]: I0509 01:13:58.694180 2813 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 01:13:58.703950 kubelet[2813]: I0509 01:13:58.703632 2813 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 01:13:58.703950 kubelet[2813]: I0509 01:13:58.703682 2813 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 01:13:58.705074 kubelet[2813]: I0509 01:13:58.704675 2813 server.go:927] "Client rotation is on, will bootstrap in background" May 9 01:13:58.708808 kubelet[2813]: I0509 01:13:58.708777 2813 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 01:13:58.712499 kubelet[2813]: I0509 01:13:58.712460 2813 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 01:13:58.724568 kubelet[2813]: I0509 01:13:58.722272 2813 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 01:13:58.724568 kubelet[2813]: I0509 01:13:58.722468 2813 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 01:13:58.724568 kubelet[2813]: I0509 01:13:58.722490 2813 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-58e4f3488e.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 01:13:58.724568 kubelet[2813]: I0509 01:13:58.722915 2813 topology_manager.go:138] "Creating topology manager with none 
policy" May 9 01:13:58.724837 kubelet[2813]: I0509 01:13:58.722926 2813 container_manager_linux.go:301] "Creating device plugin manager" May 9 01:13:58.724837 kubelet[2813]: I0509 01:13:58.722961 2813 state_mem.go:36] "Initialized new in-memory state store" May 9 01:13:58.724837 kubelet[2813]: I0509 01:13:58.723093 2813 kubelet.go:400] "Attempting to sync node with API server" May 9 01:13:58.724837 kubelet[2813]: I0509 01:13:58.723106 2813 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 01:13:58.724837 kubelet[2813]: I0509 01:13:58.723196 2813 kubelet.go:312] "Adding apiserver pod source" May 9 01:13:58.724837 kubelet[2813]: I0509 01:13:58.723212 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 01:13:58.730996 kubelet[2813]: I0509 01:13:58.729051 2813 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 9 01:13:58.731257 kubelet[2813]: I0509 01:13:58.731242 2813 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 01:13:58.731730 kubelet[2813]: I0509 01:13:58.731717 2813 server.go:1264] "Started kubelet" May 9 01:13:58.735343 kubelet[2813]: I0509 01:13:58.735327 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 01:13:58.740100 kubelet[2813]: I0509 01:13:58.740045 2813 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 01:13:58.741792 kubelet[2813]: I0509 01:13:58.741768 2813 server.go:455] "Adding debug handlers to kubelet server" May 9 01:13:58.744992 kubelet[2813]: I0509 01:13:58.743265 2813 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 01:13:58.744992 kubelet[2813]: I0509 01:13:58.743847 2813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 01:13:58.744992 kubelet[2813]: I0509 01:13:58.744904 2813 server.go:227] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 01:13:58.745174 kubelet[2813]: I0509 01:13:58.745162 2813 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 01:13:58.745353 kubelet[2813]: I0509 01:13:58.745342 2813 reconciler.go:26] "Reconciler: start to sync state" May 9 01:13:58.748588 kubelet[2813]: I0509 01:13:58.748491 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 01:13:58.751079 kubelet[2813]: I0509 01:13:58.751062 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 01:13:58.753359 kubelet[2813]: I0509 01:13:58.751165 2813 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 01:13:58.753359 kubelet[2813]: I0509 01:13:58.752652 2813 kubelet.go:2337] "Starting kubelet main sync loop" May 9 01:13:58.753359 kubelet[2813]: E0509 01:13:58.752694 2813 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 01:13:58.760587 kubelet[2813]: E0509 01:13:58.760551 2813 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 01:13:58.761034 kubelet[2813]: I0509 01:13:58.760890 2813 factory.go:221] Registration of the systemd container factory successfully May 9 01:13:58.761034 kubelet[2813]: I0509 01:13:58.760996 2813 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 01:13:58.763192 kubelet[2813]: I0509 01:13:58.763168 2813 factory.go:221] Registration of the containerd container factory successfully May 9 01:13:58.819059 kubelet[2813]: I0509 01:13:58.818994 2813 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 01:13:58.819059 kubelet[2813]: I0509 01:13:58.819048 2813 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 01:13:58.819059 kubelet[2813]: I0509 01:13:58.819065 2813 state_mem.go:36] "Initialized new in-memory state store" May 9 01:13:58.819252 kubelet[2813]: I0509 01:13:58.819236 2813 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 01:13:58.819284 kubelet[2813]: I0509 01:13:58.819248 2813 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 01:13:58.819284 kubelet[2813]: I0509 01:13:58.819268 2813 policy_none.go:49] "None policy: Start" May 9 01:13:58.820098 kubelet[2813]: I0509 01:13:58.820085 2813 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 01:13:58.820431 kubelet[2813]: I0509 01:13:58.820172 2813 state_mem.go:35] "Initializing new in-memory state store" May 9 01:13:58.820431 kubelet[2813]: I0509 01:13:58.820361 2813 state_mem.go:75] "Updated machine memory state" May 9 01:13:58.824834 kubelet[2813]: I0509 01:13:58.824615 2813 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 01:13:58.825464 kubelet[2813]: I0509 01:13:58.825429 2813 container_log_manager.go:186] "Initializing 
container log rotate workers" workers=1 monitorPeriod="10s" May 9 01:13:58.826715 kubelet[2813]: I0509 01:13:58.825953 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 01:13:58.847267 kubelet[2813]: I0509 01:13:58.847228 2813 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:58.853964 kubelet[2813]: I0509 01:13:58.853411 2813 topology_manager.go:215] "Topology Admit Handler" podUID="cdce352c4dc1895da3487643d226a42c" podNamespace="kube-system" podName="kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:58.853964 kubelet[2813]: I0509 01:13:58.853499 2813 topology_manager.go:215] "Topology Admit Handler" podUID="85e077bbafe6b3f7784c3ddad080445e" podNamespace="kube-system" podName="kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:58.853964 kubelet[2813]: I0509 01:13:58.853567 2813 topology_manager.go:215] "Topology Admit Handler" podUID="10d939345df56b44868f2dd96604a79b" podNamespace="kube-system" podName="kube-scheduler-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:58.868817 kubelet[2813]: I0509 01:13:58.867078 2813 kubelet_node_status.go:112] "Node was previously registered" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:58.868817 kubelet[2813]: I0509 01:13:58.867181 2813 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:58.871594 kubelet[2813]: W0509 01:13:58.871563 2813 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 01:13:58.879044 kubelet[2813]: W0509 01:13:58.879010 2813 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 01:13:58.879509 kubelet[2813]: W0509 01:13:58.879222 2813 warnings.go:70] metadata.name: this is used in the Pod's hostname, which 
can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 01:13:59.047233 kubelet[2813]: I0509 01:13:59.046942 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10d939345df56b44868f2dd96604a79b-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"10d939345df56b44868f2dd96604a79b\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:59.047233 kubelet[2813]: I0509 01:13:59.046994 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdce352c4dc1895da3487643d226a42c-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"cdce352c4dc1895da3487643d226a42c\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:59.047233 kubelet[2813]: I0509 01:13:59.047021 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdce352c4dc1895da3487643d226a42c-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"cdce352c4dc1895da3487643d226a42c\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:59.047233 kubelet[2813]: I0509 01:13:59.047039 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/85e077bbafe6b3f7784c3ddad080445e-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"85e077bbafe6b3f7784c3ddad080445e\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:59.047233 kubelet[2813]: I0509 01:13:59.047058 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/85e077bbafe6b3f7784c3ddad080445e-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"85e077bbafe6b3f7784c3ddad080445e\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:59.047456 kubelet[2813]: I0509 01:13:59.047089 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdce352c4dc1895da3487643d226a42c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"cdce352c4dc1895da3487643d226a42c\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:59.047456 kubelet[2813]: I0509 01:13:59.047110 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/85e077bbafe6b3f7784c3ddad080445e-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"85e077bbafe6b3f7784c3ddad080445e\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:59.047456 kubelet[2813]: I0509 01:13:59.047132 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85e077bbafe6b3f7784c3ddad080445e-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"85e077bbafe6b3f7784c3ddad080445e\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:59.047456 kubelet[2813]: I0509 01:13:59.047152 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/85e077bbafe6b3f7784c3ddad080445e-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal\" (UID: \"85e077bbafe6b3f7784c3ddad080445e\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:59.725957 kubelet[2813]: I0509 01:13:59.725879 2813 apiserver.go:52] "Watching apiserver" May 9 01:13:59.745668 kubelet[2813]: I0509 01:13:59.745608 2813 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 01:13:59.805009 kubelet[2813]: W0509 01:13:59.804938 2813 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 01:13:59.805206 kubelet[2813]: E0509 01:13:59.805048 2813 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:13:59.831353 kubelet[2813]: I0509 01:13:59.831049 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-58e4f3488e.novalocal" podStartSLOduration=1.831030557 podStartE2EDuration="1.831030557s" podCreationTimestamp="2025-05-09 01:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 01:13:59.821060867 +0000 UTC m=+1.186165346" watchObservedRunningTime="2025-05-09 01:13:59.831030557 +0000 UTC m=+1.196135026" May 9 01:13:59.845364 kubelet[2813]: I0509 01:13:59.845225 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284-0-0-n-58e4f3488e.novalocal" podStartSLOduration=1.845207173 podStartE2EDuration="1.845207173s" podCreationTimestamp="2025-05-09 01:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 01:13:59.8439373 
+0000 UTC m=+1.209041769" watchObservedRunningTime="2025-05-09 01:13:59.845207173 +0000 UTC m=+1.210311652" May 9 01:13:59.846424 kubelet[2813]: I0509 01:13:59.846321 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284-0-0-n-58e4f3488e.novalocal" podStartSLOduration=1.846290836 podStartE2EDuration="1.846290836s" podCreationTimestamp="2025-05-09 01:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 01:13:59.83276396 +0000 UTC m=+1.197868489" watchObservedRunningTime="2025-05-09 01:13:59.846290836 +0000 UTC m=+1.211395355" May 9 01:14:04.647173 sudo[1759]: pam_unix(sudo:session): session closed for user root May 9 01:14:04.862792 sshd[1758]: Connection closed by 172.24.4.1 port 57514 May 9 01:14:04.863920 sshd-session[1755]: pam_unix(sshd:session): session closed for user core May 9 01:14:04.869952 systemd[1]: sshd@8-172.24.4.244:22-172.24.4.1:57514.service: Deactivated successfully. May 9 01:14:04.872559 systemd[1]: session-11.scope: Deactivated successfully. May 9 01:14:04.872824 systemd[1]: session-11.scope: Consumed 8.123s CPU time, 248M memory peak. May 9 01:14:04.875583 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. May 9 01:14:04.877111 systemd-logind[1455]: Removed session 11. May 9 01:14:14.091681 kubelet[2813]: I0509 01:14:14.091639 2813 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 01:14:14.092210 containerd[1478]: time="2025-05-09T01:14:14.092173535Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 9 01:14:14.092932 kubelet[2813]: I0509 01:14:14.092915 2813 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 01:14:14.642610 kubelet[2813]: I0509 01:14:14.642526 2813 topology_manager.go:215] "Topology Admit Handler" podUID="f6d9b2fc-2092-4bfe-9397-1fc2f9398c51" podNamespace="kube-system" podName="kube-proxy-7mbfc" May 9 01:14:14.668620 systemd[1]: Created slice kubepods-besteffort-podf6d9b2fc_2092_4bfe_9397_1fc2f9398c51.slice - libcontainer container kubepods-besteffort-podf6d9b2fc_2092_4bfe_9397_1fc2f9398c51.slice. May 9 01:14:14.747059 kubelet[2813]: I0509 01:14:14.746953 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6d9b2fc-2092-4bfe-9397-1fc2f9398c51-lib-modules\") pod \"kube-proxy-7mbfc\" (UID: \"f6d9b2fc-2092-4bfe-9397-1fc2f9398c51\") " pod="kube-system/kube-proxy-7mbfc" May 9 01:14:14.747059 kubelet[2813]: I0509 01:14:14.747018 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6d9b2fc-2092-4bfe-9397-1fc2f9398c51-xtables-lock\") pod \"kube-proxy-7mbfc\" (UID: \"f6d9b2fc-2092-4bfe-9397-1fc2f9398c51\") " pod="kube-system/kube-proxy-7mbfc" May 9 01:14:14.747059 kubelet[2813]: I0509 01:14:14.747040 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f6d9b2fc-2092-4bfe-9397-1fc2f9398c51-kube-proxy\") pod \"kube-proxy-7mbfc\" (UID: \"f6d9b2fc-2092-4bfe-9397-1fc2f9398c51\") " pod="kube-system/kube-proxy-7mbfc" May 9 01:14:14.747059 kubelet[2813]: I0509 01:14:14.747058 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfp74\" (UniqueName: \"kubernetes.io/projected/f6d9b2fc-2092-4bfe-9397-1fc2f9398c51-kube-api-access-hfp74\") pod 
\"kube-proxy-7mbfc\" (UID: \"f6d9b2fc-2092-4bfe-9397-1fc2f9398c51\") " pod="kube-system/kube-proxy-7mbfc" May 9 01:14:14.860717 kubelet[2813]: E0509 01:14:14.860659 2813 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 9 01:14:14.860955 kubelet[2813]: E0509 01:14:14.860765 2813 projected.go:200] Error preparing data for projected volume kube-api-access-hfp74 for pod kube-system/kube-proxy-7mbfc: configmap "kube-root-ca.crt" not found May 9 01:14:14.860955 kubelet[2813]: E0509 01:14:14.860881 2813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6d9b2fc-2092-4bfe-9397-1fc2f9398c51-kube-api-access-hfp74 podName:f6d9b2fc-2092-4bfe-9397-1fc2f9398c51 nodeName:}" failed. No retries permitted until 2025-05-09 01:14:15.360840749 +0000 UTC m=+16.725945268 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hfp74" (UniqueName: "kubernetes.io/projected/f6d9b2fc-2092-4bfe-9397-1fc2f9398c51-kube-api-access-hfp74") pod "kube-proxy-7mbfc" (UID: "f6d9b2fc-2092-4bfe-9397-1fc2f9398c51") : configmap "kube-root-ca.crt" not found May 9 01:14:15.189903 kubelet[2813]: I0509 01:14:15.189841 2813 topology_manager.go:215] "Topology Admit Handler" podUID="f4a972aa-fd7a-4793-bf1f-52daf1dfe0f8" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-2mxzq" May 9 01:14:15.200862 systemd[1]: Created slice kubepods-besteffort-podf4a972aa_fd7a_4793_bf1f_52daf1dfe0f8.slice - libcontainer container kubepods-besteffort-podf4a972aa_fd7a_4793_bf1f_52daf1dfe0f8.slice. 
May 9 01:14:15.352934 kubelet[2813]: I0509 01:14:15.352879 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f4a972aa-fd7a-4793-bf1f-52daf1dfe0f8-var-lib-calico\") pod \"tigera-operator-797db67f8-2mxzq\" (UID: \"f4a972aa-fd7a-4793-bf1f-52daf1dfe0f8\") " pod="tigera-operator/tigera-operator-797db67f8-2mxzq" May 9 01:14:15.352934 kubelet[2813]: I0509 01:14:15.352920 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-744pk\" (UniqueName: \"kubernetes.io/projected/f4a972aa-fd7a-4793-bf1f-52daf1dfe0f8-kube-api-access-744pk\") pod \"tigera-operator-797db67f8-2mxzq\" (UID: \"f4a972aa-fd7a-4793-bf1f-52daf1dfe0f8\") " pod="tigera-operator/tigera-operator-797db67f8-2mxzq" May 9 01:14:15.505673 containerd[1478]: time="2025-05-09T01:14:15.505445003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-2mxzq,Uid:f4a972aa-fd7a-4793-bf1f-52daf1dfe0f8,Namespace:tigera-operator,Attempt:0,}" May 9 01:14:15.549896 containerd[1478]: time="2025-05-09T01:14:15.549799058Z" level=info msg="connecting to shim 946b4a710a12eeb781b474d3508a514492f772a40dd6392def17f94b55e6d8e0" address="unix:///run/containerd/s/af88736dd002e2250d9a67085fa8bd173c68af4fc819aacffe8db2567258af92" namespace=k8s.io protocol=ttrpc version=3 May 9 01:14:15.588139 containerd[1478]: time="2025-05-09T01:14:15.587553603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7mbfc,Uid:f6d9b2fc-2092-4bfe-9397-1fc2f9398c51,Namespace:kube-system,Attempt:0,}" May 9 01:14:15.597173 systemd[1]: Started cri-containerd-946b4a710a12eeb781b474d3508a514492f772a40dd6392def17f94b55e6d8e0.scope - libcontainer container 946b4a710a12eeb781b474d3508a514492f772a40dd6392def17f94b55e6d8e0. 
May 9 01:14:15.630727 containerd[1478]: time="2025-05-09T01:14:15.630670427Z" level=info msg="connecting to shim 76bcc5731293ed0dde940892a27a77b650ea863bf11c00d5b940adc5dfa9d1c8" address="unix:///run/containerd/s/7cb5fa5340bd3c6021ce2f0fedc742afddb76292057a995b8cf949d46898826a" namespace=k8s.io protocol=ttrpc version=3 May 9 01:14:15.656179 systemd[1]: Started cri-containerd-76bcc5731293ed0dde940892a27a77b650ea863bf11c00d5b940adc5dfa9d1c8.scope - libcontainer container 76bcc5731293ed0dde940892a27a77b650ea863bf11c00d5b940adc5dfa9d1c8. May 9 01:14:15.662802 containerd[1478]: time="2025-05-09T01:14:15.662756929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-2mxzq,Uid:f4a972aa-fd7a-4793-bf1f-52daf1dfe0f8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"946b4a710a12eeb781b474d3508a514492f772a40dd6392def17f94b55e6d8e0\"" May 9 01:14:15.665413 containerd[1478]: time="2025-05-09T01:14:15.665103842Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 9 01:14:15.691926 containerd[1478]: time="2025-05-09T01:14:15.691867007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7mbfc,Uid:f6d9b2fc-2092-4bfe-9397-1fc2f9398c51,Namespace:kube-system,Attempt:0,} returns sandbox id \"76bcc5731293ed0dde940892a27a77b650ea863bf11c00d5b940adc5dfa9d1c8\"" May 9 01:14:15.700553 containerd[1478]: time="2025-05-09T01:14:15.700499702Z" level=info msg="CreateContainer within sandbox \"76bcc5731293ed0dde940892a27a77b650ea863bf11c00d5b940adc5dfa9d1c8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 01:14:15.714314 containerd[1478]: time="2025-05-09T01:14:15.714266708Z" level=info msg="Container fc87b22311ae4536c969f3f11906310f0244a4ed7231db467bf3f58ba66d9636: CDI devices from CRI Config.CDIDevices: []" May 9 01:14:15.727800 containerd[1478]: time="2025-05-09T01:14:15.727763257Z" level=info msg="CreateContainer within sandbox \"76bcc5731293ed0dde940892a27a77b650ea863bf11c00d5b940adc5dfa9d1c8\" 
for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fc87b22311ae4536c969f3f11906310f0244a4ed7231db467bf3f58ba66d9636\"" May 9 01:14:15.729144 containerd[1478]: time="2025-05-09T01:14:15.728664116Z" level=info msg="StartContainer for \"fc87b22311ae4536c969f3f11906310f0244a4ed7231db467bf3f58ba66d9636\"" May 9 01:14:15.730444 containerd[1478]: time="2025-05-09T01:14:15.730390886Z" level=info msg="connecting to shim fc87b22311ae4536c969f3f11906310f0244a4ed7231db467bf3f58ba66d9636" address="unix:///run/containerd/s/7cb5fa5340bd3c6021ce2f0fedc742afddb76292057a995b8cf949d46898826a" protocol=ttrpc version=3 May 9 01:14:15.750141 systemd[1]: Started cri-containerd-fc87b22311ae4536c969f3f11906310f0244a4ed7231db467bf3f58ba66d9636.scope - libcontainer container fc87b22311ae4536c969f3f11906310f0244a4ed7231db467bf3f58ba66d9636. May 9 01:14:15.805496 containerd[1478]: time="2025-05-09T01:14:15.804674135Z" level=info msg="StartContainer for \"fc87b22311ae4536c969f3f11906310f0244a4ed7231db467bf3f58ba66d9636\" returns successfully" May 9 01:14:17.589755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2156405831.mount: Deactivated successfully. 
May 9 01:14:18.155166 containerd[1478]: time="2025-05-09T01:14:18.155069134Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:18.156495 containerd[1478]: time="2025-05-09T01:14:18.156413015Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 9 01:14:18.158283 containerd[1478]: time="2025-05-09T01:14:18.158214705Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:18.161071 containerd[1478]: time="2025-05-09T01:14:18.161025217Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:18.162072 containerd[1478]: time="2025-05-09T01:14:18.161770795Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.496631467s" May 9 01:14:18.162072 containerd[1478]: time="2025-05-09T01:14:18.161814147Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 9 01:14:18.165038 containerd[1478]: time="2025-05-09T01:14:18.164501628Z" level=info msg="CreateContainer within sandbox \"946b4a710a12eeb781b474d3508a514492f772a40dd6392def17f94b55e6d8e0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 9 01:14:18.178438 containerd[1478]: time="2025-05-09T01:14:18.178388949Z" level=info msg="Container 
24563d185a308463b04e82d6a4fdb2c55205d440f98cdaeafea67e4c10b40802: CDI devices from CRI Config.CDIDevices: []" May 9 01:14:18.191765 containerd[1478]: time="2025-05-09T01:14:18.191713103Z" level=info msg="CreateContainer within sandbox \"946b4a710a12eeb781b474d3508a514492f772a40dd6392def17f94b55e6d8e0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"24563d185a308463b04e82d6a4fdb2c55205d440f98cdaeafea67e4c10b40802\"" May 9 01:14:18.192679 containerd[1478]: time="2025-05-09T01:14:18.192407545Z" level=info msg="StartContainer for \"24563d185a308463b04e82d6a4fdb2c55205d440f98cdaeafea67e4c10b40802\"" May 9 01:14:18.193559 containerd[1478]: time="2025-05-09T01:14:18.193510654Z" level=info msg="connecting to shim 24563d185a308463b04e82d6a4fdb2c55205d440f98cdaeafea67e4c10b40802" address="unix:///run/containerd/s/af88736dd002e2250d9a67085fa8bd173c68af4fc819aacffe8db2567258af92" protocol=ttrpc version=3 May 9 01:14:18.221144 systemd[1]: Started cri-containerd-24563d185a308463b04e82d6a4fdb2c55205d440f98cdaeafea67e4c10b40802.scope - libcontainer container 24563d185a308463b04e82d6a4fdb2c55205d440f98cdaeafea67e4c10b40802. 
May 9 01:14:18.257186 containerd[1478]: time="2025-05-09T01:14:18.256501146Z" level=info msg="StartContainer for \"24563d185a308463b04e82d6a4fdb2c55205d440f98cdaeafea67e4c10b40802\" returns successfully" May 9 01:14:18.861429 kubelet[2813]: I0509 01:14:18.860194 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7mbfc" podStartSLOduration=4.860172003 podStartE2EDuration="4.860172003s" podCreationTimestamp="2025-05-09 01:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 01:14:15.850721477 +0000 UTC m=+17.215825956" watchObservedRunningTime="2025-05-09 01:14:18.860172003 +0000 UTC m=+20.225276482" May 9 01:14:18.862587 kubelet[2813]: I0509 01:14:18.861663 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-2mxzq" podStartSLOduration=1.362988198 podStartE2EDuration="3.861650065s" podCreationTimestamp="2025-05-09 01:14:15 +0000 UTC" firstStartedPulling="2025-05-09 01:14:15.664482455 +0000 UTC m=+17.029586924" lastFinishedPulling="2025-05-09 01:14:18.163144322 +0000 UTC m=+19.528248791" observedRunningTime="2025-05-09 01:14:18.860051887 +0000 UTC m=+20.225156356" watchObservedRunningTime="2025-05-09 01:14:18.861650065 +0000 UTC m=+20.226754564" May 9 01:14:21.459061 kubelet[2813]: I0509 01:14:21.457661 2813 topology_manager.go:215] "Topology Admit Handler" podUID="26bd38a3-f2c9-413c-b064-c394e7508ebc" podNamespace="calico-system" podName="calico-typha-588fc697b6-zp5v6" May 9 01:14:21.474821 systemd[1]: Created slice kubepods-besteffort-pod26bd38a3_f2c9_413c_b064_c394e7508ebc.slice - libcontainer container kubepods-besteffort-pod26bd38a3_f2c9_413c_b064_c394e7508ebc.slice. 
May 9 01:14:21.503823 kubelet[2813]: I0509 01:14:21.503776 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26bd38a3-f2c9-413c-b064-c394e7508ebc-tigera-ca-bundle\") pod \"calico-typha-588fc697b6-zp5v6\" (UID: \"26bd38a3-f2c9-413c-b064-c394e7508ebc\") " pod="calico-system/calico-typha-588fc697b6-zp5v6" May 9 01:14:21.503823 kubelet[2813]: I0509 01:14:21.503826 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/26bd38a3-f2c9-413c-b064-c394e7508ebc-typha-certs\") pod \"calico-typha-588fc697b6-zp5v6\" (UID: \"26bd38a3-f2c9-413c-b064-c394e7508ebc\") " pod="calico-system/calico-typha-588fc697b6-zp5v6" May 9 01:14:21.504193 kubelet[2813]: I0509 01:14:21.503850 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj6nc\" (UniqueName: \"kubernetes.io/projected/26bd38a3-f2c9-413c-b064-c394e7508ebc-kube-api-access-tj6nc\") pod \"calico-typha-588fc697b6-zp5v6\" (UID: \"26bd38a3-f2c9-413c-b064-c394e7508ebc\") " pod="calico-system/calico-typha-588fc697b6-zp5v6" May 9 01:14:21.571812 kubelet[2813]: I0509 01:14:21.571766 2813 topology_manager.go:215] "Topology Admit Handler" podUID="33bb8f2b-4110-4fc6-ada8-36b61bfa49be" podNamespace="calico-system" podName="calico-node-m4kn5" May 9 01:14:21.582871 systemd[1]: Created slice kubepods-besteffort-pod33bb8f2b_4110_4fc6_ada8_36b61bfa49be.slice - libcontainer container kubepods-besteffort-pod33bb8f2b_4110_4fc6_ada8_36b61bfa49be.slice. 
May 9 01:14:21.704518 kubelet[2813]: I0509 01:14:21.704457 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/33bb8f2b-4110-4fc6-ada8-36b61bfa49be-policysync\") pod \"calico-node-m4kn5\" (UID: \"33bb8f2b-4110-4fc6-ada8-36b61bfa49be\") " pod="calico-system/calico-node-m4kn5" May 9 01:14:21.704518 kubelet[2813]: I0509 01:14:21.704517 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/33bb8f2b-4110-4fc6-ada8-36b61bfa49be-var-lib-calico\") pod \"calico-node-m4kn5\" (UID: \"33bb8f2b-4110-4fc6-ada8-36b61bfa49be\") " pod="calico-system/calico-node-m4kn5" May 9 01:14:21.704518 kubelet[2813]: I0509 01:14:21.704539 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/33bb8f2b-4110-4fc6-ada8-36b61bfa49be-cni-bin-dir\") pod \"calico-node-m4kn5\" (UID: \"33bb8f2b-4110-4fc6-ada8-36b61bfa49be\") " pod="calico-system/calico-node-m4kn5" May 9 01:14:21.704805 kubelet[2813]: I0509 01:14:21.704561 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/33bb8f2b-4110-4fc6-ada8-36b61bfa49be-flexvol-driver-host\") pod \"calico-node-m4kn5\" (UID: \"33bb8f2b-4110-4fc6-ada8-36b61bfa49be\") " pod="calico-system/calico-node-m4kn5" May 9 01:14:21.704805 kubelet[2813]: I0509 01:14:21.704582 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/33bb8f2b-4110-4fc6-ada8-36b61bfa49be-cni-log-dir\") pod \"calico-node-m4kn5\" (UID: \"33bb8f2b-4110-4fc6-ada8-36b61bfa49be\") " pod="calico-system/calico-node-m4kn5" May 9 01:14:21.704805 kubelet[2813]: I0509 01:14:21.704600 2813 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33bb8f2b-4110-4fc6-ada8-36b61bfa49be-tigera-ca-bundle\") pod \"calico-node-m4kn5\" (UID: \"33bb8f2b-4110-4fc6-ada8-36b61bfa49be\") " pod="calico-system/calico-node-m4kn5" May 9 01:14:21.704805 kubelet[2813]: I0509 01:14:21.704618 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33bb8f2b-4110-4fc6-ada8-36b61bfa49be-lib-modules\") pod \"calico-node-m4kn5\" (UID: \"33bb8f2b-4110-4fc6-ada8-36b61bfa49be\") " pod="calico-system/calico-node-m4kn5" May 9 01:14:21.704805 kubelet[2813]: I0509 01:14:21.704636 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/33bb8f2b-4110-4fc6-ada8-36b61bfa49be-node-certs\") pod \"calico-node-m4kn5\" (UID: \"33bb8f2b-4110-4fc6-ada8-36b61bfa49be\") " pod="calico-system/calico-node-m4kn5" May 9 01:14:21.704936 kubelet[2813]: I0509 01:14:21.704653 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/33bb8f2b-4110-4fc6-ada8-36b61bfa49be-var-run-calico\") pod \"calico-node-m4kn5\" (UID: \"33bb8f2b-4110-4fc6-ada8-36b61bfa49be\") " pod="calico-system/calico-node-m4kn5" May 9 01:14:21.704936 kubelet[2813]: I0509 01:14:21.704678 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5rnf\" (UniqueName: \"kubernetes.io/projected/33bb8f2b-4110-4fc6-ada8-36b61bfa49be-kube-api-access-b5rnf\") pod \"calico-node-m4kn5\" (UID: \"33bb8f2b-4110-4fc6-ada8-36b61bfa49be\") " pod="calico-system/calico-node-m4kn5" May 9 01:14:21.704936 kubelet[2813]: I0509 01:14:21.704697 2813 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/33bb8f2b-4110-4fc6-ada8-36b61bfa49be-cni-net-dir\") pod \"calico-node-m4kn5\" (UID: \"33bb8f2b-4110-4fc6-ada8-36b61bfa49be\") " pod="calico-system/calico-node-m4kn5" May 9 01:14:21.704936 kubelet[2813]: I0509 01:14:21.704713 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33bb8f2b-4110-4fc6-ada8-36b61bfa49be-xtables-lock\") pod \"calico-node-m4kn5\" (UID: \"33bb8f2b-4110-4fc6-ada8-36b61bfa49be\") " pod="calico-system/calico-node-m4kn5" May 9 01:14:21.725052 kubelet[2813]: I0509 01:14:21.723559 2813 topology_manager.go:215] "Topology Admit Handler" podUID="4d010afc-8605-44c1-9991-fd6272876d69" podNamespace="calico-system" podName="csi-node-driver-zgbc8" May 9 01:14:21.725052 kubelet[2813]: E0509 01:14:21.723875 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgbc8" podUID="4d010afc-8605-44c1-9991-fd6272876d69" May 9 01:14:21.784445 containerd[1478]: time="2025-05-09T01:14:21.783702801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-588fc697b6-zp5v6,Uid:26bd38a3-f2c9-413c-b064-c394e7508ebc,Namespace:calico-system,Attempt:0,}" May 9 01:14:21.807371 kubelet[2813]: E0509 01:14:21.807334 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.807371 kubelet[2813]: W0509 01:14:21.807360 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.807909 
kubelet[2813]: E0509 01:14:21.807385 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:21.808815 kubelet[2813]: E0509 01:14:21.808780 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.808815 kubelet[2813]: W0509 01:14:21.808797 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.808815 kubelet[2813]: E0509 01:14:21.808809 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:21.810927 kubelet[2813]: E0509 01:14:21.810891 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.811156 kubelet[2813]: W0509 01:14:21.810928 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.811156 kubelet[2813]: E0509 01:14:21.810950 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:21.811644 kubelet[2813]: E0509 01:14:21.811606 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.811644 kubelet[2813]: W0509 01:14:21.811621 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.811644 kubelet[2813]: E0509 01:14:21.811637 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:21.814304 kubelet[2813]: E0509 01:14:21.814146 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.814304 kubelet[2813]: W0509 01:14:21.814211 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.814304 kubelet[2813]: E0509 01:14:21.814237 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:21.823631 kubelet[2813]: E0509 01:14:21.821129 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.823631 kubelet[2813]: W0509 01:14:21.821151 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.823631 kubelet[2813]: E0509 01:14:21.821171 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:21.838949 kubelet[2813]: E0509 01:14:21.838893 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.838949 kubelet[2813]: W0509 01:14:21.838936 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.838949 kubelet[2813]: E0509 01:14:21.838956 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:21.849210 containerd[1478]: time="2025-05-09T01:14:21.848061924Z" level=info msg="connecting to shim f1025b0b7fdc94845aa778ef16fb151244feb1c1645615bbbed471100f2c530c" address="unix:///run/containerd/s/86bb1009971ee5d3f6ddd71490f785c05dfe8064512b594f73402477bfb7aa76" namespace=k8s.io protocol=ttrpc version=3 May 9 01:14:21.884153 systemd[1]: Started cri-containerd-f1025b0b7fdc94845aa778ef16fb151244feb1c1645615bbbed471100f2c530c.scope - libcontainer container f1025b0b7fdc94845aa778ef16fb151244feb1c1645615bbbed471100f2c530c. 
May 9 01:14:21.889182 containerd[1478]: time="2025-05-09T01:14:21.889146019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m4kn5,Uid:33bb8f2b-4110-4fc6-ada8-36b61bfa49be,Namespace:calico-system,Attempt:0,}" May 9 01:14:21.906554 kubelet[2813]: E0509 01:14:21.906275 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.906554 kubelet[2813]: W0509 01:14:21.906313 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.906554 kubelet[2813]: E0509 01:14:21.906334 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:21.906554 kubelet[2813]: I0509 01:14:21.906364 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d010afc-8605-44c1-9991-fd6272876d69-kubelet-dir\") pod \"csi-node-driver-zgbc8\" (UID: \"4d010afc-8605-44c1-9991-fd6272876d69\") " pod="calico-system/csi-node-driver-zgbc8" May 9 01:14:21.906955 kubelet[2813]: E0509 01:14:21.906729 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.906955 kubelet[2813]: W0509 01:14:21.906743 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.906955 kubelet[2813]: E0509 01:14:21.906828 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:21.906955 kubelet[2813]: I0509 01:14:21.906850 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4d010afc-8605-44c1-9991-fd6272876d69-varrun\") pod \"csi-node-driver-zgbc8\" (UID: \"4d010afc-8605-44c1-9991-fd6272876d69\") " pod="calico-system/csi-node-driver-zgbc8" May 9 01:14:21.907581 kubelet[2813]: E0509 01:14:21.907552 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.907705 kubelet[2813]: W0509 01:14:21.907630 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.907705 kubelet[2813]: E0509 01:14:21.907652 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:21.908854 kubelet[2813]: E0509 01:14:21.908794 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.908854 kubelet[2813]: W0509 01:14:21.908811 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.908854 kubelet[2813]: E0509 01:14:21.908828 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:21.910022 kubelet[2813]: E0509 01:14:21.909232 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.910022 kubelet[2813]: W0509 01:14:21.909242 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.910022 kubelet[2813]: E0509 01:14:21.909258 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:21.910022 kubelet[2813]: I0509 01:14:21.909304 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4d010afc-8605-44c1-9991-fd6272876d69-socket-dir\") pod \"csi-node-driver-zgbc8\" (UID: \"4d010afc-8605-44c1-9991-fd6272876d69\") " pod="calico-system/csi-node-driver-zgbc8" May 9 01:14:21.910498 kubelet[2813]: E0509 01:14:21.910387 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.910498 kubelet[2813]: W0509 01:14:21.910433 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.910498 kubelet[2813]: E0509 01:14:21.910466 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:21.910798 kubelet[2813]: E0509 01:14:21.910775 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.910798 kubelet[2813]: W0509 01:14:21.910793 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.911056 kubelet[2813]: E0509 01:14:21.911035 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:21.911119 kubelet[2813]: I0509 01:14:21.911066 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4d010afc-8605-44c1-9991-fd6272876d69-registration-dir\") pod \"csi-node-driver-zgbc8\" (UID: \"4d010afc-8605-44c1-9991-fd6272876d69\") " pod="calico-system/csi-node-driver-zgbc8" May 9 01:14:21.911380 kubelet[2813]: E0509 01:14:21.911357 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.911380 kubelet[2813]: W0509 01:14:21.911378 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.911475 kubelet[2813]: E0509 01:14:21.911400 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:21.912791 kubelet[2813]: E0509 01:14:21.912677 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.912791 kubelet[2813]: W0509 01:14:21.912780 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.913361 kubelet[2813]: E0509 01:14:21.912794 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:21.913361 kubelet[2813]: E0509 01:14:21.913244 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.913361 kubelet[2813]: W0509 01:14:21.913255 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.913361 kubelet[2813]: E0509 01:14:21.913266 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:21.913361 kubelet[2813]: I0509 01:14:21.913287 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s666t\" (UniqueName: \"kubernetes.io/projected/4d010afc-8605-44c1-9991-fd6272876d69-kube-api-access-s666t\") pod \"csi-node-driver-zgbc8\" (UID: \"4d010afc-8605-44c1-9991-fd6272876d69\") " pod="calico-system/csi-node-driver-zgbc8" May 9 01:14:21.913687 kubelet[2813]: E0509 01:14:21.913549 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.913687 kubelet[2813]: W0509 01:14:21.913566 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.913687 kubelet[2813]: E0509 01:14:21.913578 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:21.914331 kubelet[2813]: E0509 01:14:21.913753 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.914331 kubelet[2813]: W0509 01:14:21.913763 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.914331 kubelet[2813]: E0509 01:14:21.913772 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:21.914331 kubelet[2813]: E0509 01:14:21.914086 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.914331 kubelet[2813]: W0509 01:14:21.914098 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.914331 kubelet[2813]: E0509 01:14:21.914115 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:21.914494 kubelet[2813]: E0509 01:14:21.914468 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.914494 kubelet[2813]: W0509 01:14:21.914478 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.914494 kubelet[2813]: E0509 01:14:21.914488 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:21.915152 kubelet[2813]: E0509 01:14:21.915083 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:21.915152 kubelet[2813]: W0509 01:14:21.915094 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:21.915152 kubelet[2813]: E0509 01:14:21.915105 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:21.935992 containerd[1478]: time="2025-05-09T01:14:21.934895105Z" level=info msg="connecting to shim bb4e958ac2f8966ab22d3bd25922904836224c2f8b4b4c8ec505a1a077495523" address="unix:///run/containerd/s/5316270b9cbf2cb73afd8edb243de4febaafc6d0acfb308a846822793c4dc191" namespace=k8s.io protocol=ttrpc version=3 May 9 01:14:21.963825 systemd[1]: Started cri-containerd-bb4e958ac2f8966ab22d3bd25922904836224c2f8b4b4c8ec505a1a077495523.scope - libcontainer container bb4e958ac2f8966ab22d3bd25922904836224c2f8b4b4c8ec505a1a077495523. 
May 9 01:14:21.993486 containerd[1478]: time="2025-05-09T01:14:21.993364170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-588fc697b6-zp5v6,Uid:26bd38a3-f2c9-413c-b064-c394e7508ebc,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1025b0b7fdc94845aa778ef16fb151244feb1c1645615bbbed471100f2c530c\"" May 9 01:14:21.997247 containerd[1478]: time="2025-05-09T01:14:21.997202099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 9 01:14:22.016476 kubelet[2813]: E0509 01:14:22.016419 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.016857 kubelet[2813]: W0509 01:14:22.016706 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.016857 kubelet[2813]: E0509 01:14:22.016732 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:22.017177 kubelet[2813]: E0509 01:14:22.017145 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.017357 kubelet[2813]: W0509 01:14:22.017274 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.017357 kubelet[2813]: E0509 01:14:22.017291 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:22.019880 kubelet[2813]: E0509 01:14:22.019708 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.019880 kubelet[2813]: W0509 01:14:22.019726 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.019880 kubelet[2813]: E0509 01:14:22.019738 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:22.020233 kubelet[2813]: E0509 01:14:22.020221 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.020469 kubelet[2813]: W0509 01:14:22.020317 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.020469 kubelet[2813]: E0509 01:14:22.020332 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:22.020869 kubelet[2813]: E0509 01:14:22.020857 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.021232 kubelet[2813]: W0509 01:14:22.021001 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.021232 kubelet[2813]: E0509 01:14:22.021017 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:22.024146 kubelet[2813]: E0509 01:14:22.022030 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.024146 kubelet[2813]: W0509 01:14:22.022065 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.024146 kubelet[2813]: E0509 01:14:22.022100 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:22.024146 kubelet[2813]: E0509 01:14:22.022335 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.024146 kubelet[2813]: W0509 01:14:22.022344 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.024146 kubelet[2813]: E0509 01:14:22.022430 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:22.024146 kubelet[2813]: E0509 01:14:22.022618 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.024146 kubelet[2813]: W0509 01:14:22.022627 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.024146 kubelet[2813]: E0509 01:14:22.022637 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:22.024146 kubelet[2813]: E0509 01:14:22.022952 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.024813 kubelet[2813]: W0509 01:14:22.022961 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.024813 kubelet[2813]: E0509 01:14:22.022999 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:22.024813 kubelet[2813]: E0509 01:14:22.024104 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.024813 kubelet[2813]: W0509 01:14:22.024116 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.024813 kubelet[2813]: E0509 01:14:22.024243 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:22.024813 kubelet[2813]: E0509 01:14:22.024463 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.024813 kubelet[2813]: W0509 01:14:22.024472 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.024813 kubelet[2813]: E0509 01:14:22.024483 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:22.024813 kubelet[2813]: E0509 01:14:22.024675 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.024813 kubelet[2813]: W0509 01:14:22.024683 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.025340 kubelet[2813]: E0509 01:14:22.024697 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:22.025340 kubelet[2813]: E0509 01:14:22.024892 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.025340 kubelet[2813]: W0509 01:14:22.024902 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.025340 kubelet[2813]: E0509 01:14:22.024916 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:22.025697 kubelet[2813]: E0509 01:14:22.025651 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.025697 kubelet[2813]: W0509 01:14:22.025666 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.025697 kubelet[2813]: E0509 01:14:22.025676 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:22.025919 kubelet[2813]: E0509 01:14:22.025856 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.025919 kubelet[2813]: W0509 01:14:22.025865 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.025919 kubelet[2813]: E0509 01:14:22.025875 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:22.026503 kubelet[2813]: E0509 01:14:22.026088 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.026503 kubelet[2813]: W0509 01:14:22.026103 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.026503 kubelet[2813]: E0509 01:14:22.026113 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:22.028491 kubelet[2813]: E0509 01:14:22.028196 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.028491 kubelet[2813]: W0509 01:14:22.028211 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.028491 kubelet[2813]: E0509 01:14:22.028224 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:22.029189 kubelet[2813]: E0509 01:14:22.029107 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.029189 kubelet[2813]: W0509 01:14:22.029125 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.029189 kubelet[2813]: E0509 01:14:22.029144 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:22.029425 kubelet[2813]: E0509 01:14:22.029336 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.029425 kubelet[2813]: W0509 01:14:22.029352 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.029425 kubelet[2813]: E0509 01:14:22.029362 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:22.029627 kubelet[2813]: E0509 01:14:22.029521 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.029627 kubelet[2813]: W0509 01:14:22.029530 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.029627 kubelet[2813]: E0509 01:14:22.029540 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:22.029922 kubelet[2813]: E0509 01:14:22.029668 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.029922 kubelet[2813]: W0509 01:14:22.029677 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.029922 kubelet[2813]: E0509 01:14:22.029686 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:22.029922 kubelet[2813]: E0509 01:14:22.029836 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.029922 kubelet[2813]: W0509 01:14:22.029846 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.029922 kubelet[2813]: E0509 01:14:22.029855 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:22.031137 kubelet[2813]: E0509 01:14:22.030374 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.031137 kubelet[2813]: W0509 01:14:22.030384 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.031137 kubelet[2813]: E0509 01:14:22.030394 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:22.031137 kubelet[2813]: E0509 01:14:22.030529 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.031137 kubelet[2813]: W0509 01:14:22.030537 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.031137 kubelet[2813]: E0509 01:14:22.030550 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:22.031137 kubelet[2813]: E0509 01:14:22.030862 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.031137 kubelet[2813]: W0509 01:14:22.030875 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.031137 kubelet[2813]: E0509 01:14:22.030884 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:22.049890 kubelet[2813]: E0509 01:14:22.049624 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:22.049890 kubelet[2813]: W0509 01:14:22.049668 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:22.049890 kubelet[2813]: E0509 01:14:22.049688 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:22.051639 containerd[1478]: time="2025-05-09T01:14:22.051597994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m4kn5,Uid:33bb8f2b-4110-4fc6-ada8-36b61bfa49be,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb4e958ac2f8966ab22d3bd25922904836224c2f8b4b4c8ec505a1a077495523\"" May 9 01:14:23.754163 kubelet[2813]: E0509 01:14:23.753937 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgbc8" podUID="4d010afc-8605-44c1-9991-fd6272876d69" May 9 01:14:25.458026 containerd[1478]: time="2025-05-09T01:14:25.457492624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:25.458938 containerd[1478]: time="2025-05-09T01:14:25.458893110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 9 01:14:25.460275 containerd[1478]: time="2025-05-09T01:14:25.460228745Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:25.465807 containerd[1478]: time="2025-05-09T01:14:25.464662373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:25.465807 containerd[1478]: time="2025-05-09T01:14:25.465108149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.467861125s" May 9 01:14:25.465807 containerd[1478]: time="2025-05-09T01:14:25.465150949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 9 01:14:25.468541 containerd[1478]: time="2025-05-09T01:14:25.468514848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 9 01:14:25.482997 containerd[1478]: time="2025-05-09T01:14:25.482484191Z" level=info msg="CreateContainer within sandbox \"f1025b0b7fdc94845aa778ef16fb151244feb1c1645615bbbed471100f2c530c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 9 01:14:25.496138 containerd[1478]: time="2025-05-09T01:14:25.496102285Z" level=info msg="Container 2553424bf3be122cc1b296f346c46d5157b214559b5cc20be9bf79b30171dc43: CDI devices from CRI Config.CDIDevices: []" May 9 01:14:25.507862 containerd[1478]: time="2025-05-09T01:14:25.507810776Z" level=info msg="CreateContainer within sandbox \"f1025b0b7fdc94845aa778ef16fb151244feb1c1645615bbbed471100f2c530c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2553424bf3be122cc1b296f346c46d5157b214559b5cc20be9bf79b30171dc43\"" May 9 01:14:25.509447 containerd[1478]: time="2025-05-09T01:14:25.509378147Z" level=info msg="StartContainer for \"2553424bf3be122cc1b296f346c46d5157b214559b5cc20be9bf79b30171dc43\"" May 9 01:14:25.511027 containerd[1478]: time="2025-05-09T01:14:25.510962609Z" level=info msg="connecting to shim 2553424bf3be122cc1b296f346c46d5157b214559b5cc20be9bf79b30171dc43" address="unix:///run/containerd/s/86bb1009971ee5d3f6ddd71490f785c05dfe8064512b594f73402477bfb7aa76" protocol=ttrpc version=3 May 9 01:14:25.541160 systemd[1]: Started cri-containerd-2553424bf3be122cc1b296f346c46d5157b214559b5cc20be9bf79b30171dc43.scope - libcontainer container 
2553424bf3be122cc1b296f346c46d5157b214559b5cc20be9bf79b30171dc43. May 9 01:14:25.611134 containerd[1478]: time="2025-05-09T01:14:25.610968672Z" level=info msg="StartContainer for \"2553424bf3be122cc1b296f346c46d5157b214559b5cc20be9bf79b30171dc43\" returns successfully" May 9 01:14:25.753377 kubelet[2813]: E0509 01:14:25.753034 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgbc8" podUID="4d010afc-8605-44c1-9991-fd6272876d69" May 9 01:14:25.885992 kubelet[2813]: I0509 01:14:25.885899 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-588fc697b6-zp5v6" podStartSLOduration=1.414544479 podStartE2EDuration="4.88588011s" podCreationTimestamp="2025-05-09 01:14:21 +0000 UTC" firstStartedPulling="2025-05-09 01:14:21.99616269 +0000 UTC m=+23.361267169" lastFinishedPulling="2025-05-09 01:14:25.467498331 +0000 UTC m=+26.832602800" observedRunningTime="2025-05-09 01:14:25.885146124 +0000 UTC m=+27.250250593" watchObservedRunningTime="2025-05-09 01:14:25.88588011 +0000 UTC m=+27.250984589" May 9 01:14:25.936915 kubelet[2813]: E0509 01:14:25.936865 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:25.936915 kubelet[2813]: W0509 01:14:25.936895 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:25.936915 kubelet[2813]: E0509 01:14:25.936917 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:25.937196 kubelet[2813]: E0509 01:14:25.937157 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:25.937196 kubelet[2813]: W0509 01:14:25.937172 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:25.937196 kubelet[2813]: E0509 01:14:25.937181 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:25.937365 kubelet[2813]: E0509 01:14:25.937333 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:25.937365 kubelet[2813]: W0509 01:14:25.937343 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:25.937365 kubelet[2813]: E0509 01:14:25.937354 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
May 9 01:14:26.874311 kubelet[2813]: I0509 01:14:26.874255 2813 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Error: unexpected end of JSON input" May 9 01:14:26.970858 kubelet[2813]: E0509 01:14:26.970792 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:26.970858 kubelet[2813]: W0509 01:14:26.970831 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:26.971143 kubelet[2813]: E0509 01:14:26.970955 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:26.971675 kubelet[2813]: E0509 01:14:26.971610 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:26.971675 kubelet[2813]: W0509 01:14:26.971646 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:26.972076 kubelet[2813]: E0509 01:14:26.971944 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:26.972161 kubelet[2813]: E0509 01:14:26.972117 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:26.972161 kubelet[2813]: W0509 01:14:26.972135 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:26.972400 kubelet[2813]: E0509 01:14:26.972283 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:26.972626 kubelet[2813]: E0509 01:14:26.972534 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:26.972626 kubelet[2813]: W0509 01:14:26.972599 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:26.972949 kubelet[2813]: E0509 01:14:26.972886 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:26.974063 kubelet[2813]: E0509 01:14:26.973896 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:26.974063 kubelet[2813]: W0509 01:14:26.973940 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:26.974063 kubelet[2813]: E0509 01:14:26.973967 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:26.976854 kubelet[2813]: E0509 01:14:26.976246 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:26.976854 kubelet[2813]: W0509 01:14:26.976291 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:26.976854 kubelet[2813]: E0509 01:14:26.976330 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:26.977584 kubelet[2813]: E0509 01:14:26.977536 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:26.978067 kubelet[2813]: W0509 01:14:26.977763 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:26.978067 kubelet[2813]: E0509 01:14:26.977806 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:26.978567 kubelet[2813]: E0509 01:14:26.978527 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:26.979046 kubelet[2813]: W0509 01:14:26.978782 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:26.979046 kubelet[2813]: E0509 01:14:26.978855 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:14:26.980034 kubelet[2813]: E0509 01:14:26.979606 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:14:26.980034 kubelet[2813]: W0509 01:14:26.979634 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:14:26.980034 kubelet[2813]: E0509 01:14:26.979659 2813 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:14:27.519805 containerd[1478]: time="2025-05-09T01:14:27.519750912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:27.521456 containerd[1478]: time="2025-05-09T01:14:27.521402369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 9 01:14:27.523002 containerd[1478]: time="2025-05-09T01:14:27.522955132Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:27.531565 containerd[1478]: time="2025-05-09T01:14:27.531437632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:27.533454 containerd[1478]: time="2025-05-09T01:14:27.533271503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.064618345s" May 9 01:14:27.533454 containerd[1478]: time="2025-05-09T01:14:27.533324292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 9 01:14:27.537368 containerd[1478]: time="2025-05-09T01:14:27.537327700Z" level=info msg="CreateContainer within sandbox \"bb4e958ac2f8966ab22d3bd25922904836224c2f8b4b4c8ec505a1a077495523\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 9 01:14:27.551908 containerd[1478]: time="2025-05-09T01:14:27.551175264Z" level=info msg="Container 7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55: CDI devices from CRI Config.CDIDevices: []" May 9 01:14:27.557793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1097424225.mount: Deactivated successfully. 
May 9 01:14:27.565339 containerd[1478]: time="2025-05-09T01:14:27.565261967Z" level=info msg="CreateContainer within sandbox \"bb4e958ac2f8966ab22d3bd25922904836224c2f8b4b4c8ec505a1a077495523\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55\"" May 9 01:14:27.568197 containerd[1478]: time="2025-05-09T01:14:27.568155203Z" level=info msg="StartContainer for \"7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55\"" May 9 01:14:27.569922 containerd[1478]: time="2025-05-09T01:14:27.569876883Z" level=info msg="connecting to shim 7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55" address="unix:///run/containerd/s/5316270b9cbf2cb73afd8edb243de4febaafc6d0acfb308a846822793c4dc191" protocol=ttrpc version=3 May 9 01:14:27.604166 systemd[1]: Started cri-containerd-7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55.scope - libcontainer container 7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55. May 9 01:14:27.660651 containerd[1478]: time="2025-05-09T01:14:27.660609039Z" level=info msg="StartContainer for \"7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55\" returns successfully" May 9 01:14:27.671360 systemd[1]: cri-containerd-7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55.scope: Deactivated successfully. 
May 9 01:14:27.674389 containerd[1478]: time="2025-05-09T01:14:27.673927400Z" level=info msg="received exit event container_id:\"7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55\" id:\"7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55\" pid:3448 exited_at:{seconds:1746753267 nanos:673483197}" May 9 01:14:27.674491 containerd[1478]: time="2025-05-09T01:14:27.674403633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55\" id:\"7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55\" pid:3448 exited_at:{seconds:1746753267 nanos:673483197}" May 9 01:14:27.698955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55-rootfs.mount: Deactivated successfully. May 9 01:14:27.753968 kubelet[2813]: E0509 01:14:27.753817 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgbc8" podUID="4d010afc-8605-44c1-9991-fd6272876d69" May 9 01:14:28.908552 containerd[1478]: time="2025-05-09T01:14:28.908076918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 9 01:14:29.754033 kubelet[2813]: E0509 01:14:29.753875 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgbc8" podUID="4d010afc-8605-44c1-9991-fd6272876d69" May 9 01:14:31.756082 kubelet[2813]: E0509 01:14:31.753680 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgbc8" podUID="4d010afc-8605-44c1-9991-fd6272876d69" May 9 01:14:33.753802 kubelet[2813]: E0509 01:14:33.752992 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgbc8" podUID="4d010afc-8605-44c1-9991-fd6272876d69" May 9 01:14:35.002526 containerd[1478]: time="2025-05-09T01:14:35.002479177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:35.004502 containerd[1478]: time="2025-05-09T01:14:35.004432924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 9 01:14:35.007243 containerd[1478]: time="2025-05-09T01:14:35.005933766Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:35.008400 containerd[1478]: time="2025-05-09T01:14:35.008364713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:35.009384 containerd[1478]: time="2025-05-09T01:14:35.009355903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.101188776s" May 9 01:14:35.009512 containerd[1478]: time="2025-05-09T01:14:35.009480018Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 9 01:14:35.012069 containerd[1478]: time="2025-05-09T01:14:35.012030100Z" level=info msg="CreateContainer within sandbox \"bb4e958ac2f8966ab22d3bd25922904836224c2f8b4b4c8ec505a1a077495523\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 9 01:14:35.023344 containerd[1478]: time="2025-05-09T01:14:35.023301929Z" level=info msg="Container 3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625: CDI devices from CRI Config.CDIDevices: []" May 9 01:14:35.026826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774643713.mount: Deactivated successfully. May 9 01:14:35.042051 containerd[1478]: time="2025-05-09T01:14:35.041998409Z" level=info msg="CreateContainer within sandbox \"bb4e958ac2f8966ab22d3bd25922904836224c2f8b4b4c8ec505a1a077495523\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625\"" May 9 01:14:35.042941 containerd[1478]: time="2025-05-09T01:14:35.042880062Z" level=info msg="StartContainer for \"3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625\"" May 9 01:14:35.044735 containerd[1478]: time="2025-05-09T01:14:35.044680239Z" level=info msg="connecting to shim 3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625" address="unix:///run/containerd/s/5316270b9cbf2cb73afd8edb243de4febaafc6d0acfb308a846822793c4dc191" protocol=ttrpc version=3 May 9 01:14:35.071765 systemd[1]: Started cri-containerd-3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625.scope - libcontainer container 3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625. 
May 9 01:14:35.122911 containerd[1478]: time="2025-05-09T01:14:35.122287923Z" level=info msg="StartContainer for \"3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625\" returns successfully" May 9 01:14:35.755256 kubelet[2813]: E0509 01:14:35.754113 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zgbc8" podUID="4d010afc-8605-44c1-9991-fd6272876d69" May 9 01:14:36.266634 systemd[1]: cri-containerd-3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625.scope: Deactivated successfully. May 9 01:14:36.274274 systemd[1]: cri-containerd-3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625.scope: Consumed 685ms CPU time, 176.3M memory peak, 154M written to disk. May 9 01:14:36.276556 containerd[1478]: time="2025-05-09T01:14:36.272954877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625\" id:\"3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625\" pid:3509 exited_at:{seconds:1746753276 nanos:271993404}" May 9 01:14:36.276556 containerd[1478]: time="2025-05-09T01:14:36.273167619Z" level=info msg="received exit event container_id:\"3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625\" id:\"3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625\" pid:3509 exited_at:{seconds:1746753276 nanos:271993404}" May 9 01:14:36.314254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625-rootfs.mount: Deactivated successfully. 
May 9 01:14:36.339009 kubelet[2813]: I0509 01:14:36.335480 2813 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 9 01:14:36.849030 kubelet[2813]: I0509 01:14:36.848226 2813 topology_manager.go:215] "Topology Admit Handler" podUID="f9d20423-e293-4efa-8bcc-bf0ab9f333b5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9p94z" May 9 01:14:36.863382 kubelet[2813]: I0509 01:14:36.860890 2813 topology_manager.go:215] "Topology Admit Handler" podUID="f44564ce-4f13-4505-b7e9-09ffdcbb2bc0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jbv4v" May 9 01:14:36.867331 kubelet[2813]: I0509 01:14:36.865729 2813 topology_manager.go:215] "Topology Admit Handler" podUID="23a466ce-0281-447f-9cb4-09d9798f1f80" podNamespace="calico-system" podName="calico-kube-controllers-96d7d7cd5-lf972" May 9 01:14:36.868688 kubelet[2813]: I0509 01:14:36.862128 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxmzw\" (UniqueName: \"kubernetes.io/projected/f9d20423-e293-4efa-8bcc-bf0ab9f333b5-kube-api-access-sxmzw\") pod \"coredns-7db6d8ff4d-9p94z\" (UID: \"f9d20423-e293-4efa-8bcc-bf0ab9f333b5\") " pod="kube-system/coredns-7db6d8ff4d-9p94z" May 9 01:14:36.868688 kubelet[2813]: I0509 01:14:36.868555 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9d20423-e293-4efa-8bcc-bf0ab9f333b5-config-volume\") pod \"coredns-7db6d8ff4d-9p94z\" (UID: \"f9d20423-e293-4efa-8bcc-bf0ab9f333b5\") " pod="kube-system/coredns-7db6d8ff4d-9p94z" May 9 01:14:36.872912 kubelet[2813]: I0509 01:14:36.872835 2813 topology_manager.go:215] "Topology Admit Handler" podUID="ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638" podNamespace="calico-apiserver" podName="calico-apiserver-98b7b8ffb-8f46k" May 9 01:14:36.883038 kubelet[2813]: I0509 01:14:36.882455 2813 topology_manager.go:215] "Topology Admit Handler" 
podUID="1ac9be78-6b39-45f7-8e12-a5b27024ca27" podNamespace="calico-apiserver" podName="calico-apiserver-98b7b8ffb-59pkm" May 9 01:14:36.893405 systemd[1]: Created slice kubepods-burstable-podf9d20423_e293_4efa_8bcc_bf0ab9f333b5.slice - libcontainer container kubepods-burstable-podf9d20423_e293_4efa_8bcc_bf0ab9f333b5.slice. May 9 01:14:36.914338 systemd[1]: Created slice kubepods-besteffort-pod23a466ce_0281_447f_9cb4_09d9798f1f80.slice - libcontainer container kubepods-besteffort-pod23a466ce_0281_447f_9cb4_09d9798f1f80.slice. May 9 01:14:36.924099 systemd[1]: Created slice kubepods-burstable-podf44564ce_4f13_4505_b7e9_09ffdcbb2bc0.slice - libcontainer container kubepods-burstable-podf44564ce_4f13_4505_b7e9_09ffdcbb2bc0.slice. May 9 01:14:36.931027 systemd[1]: Created slice kubepods-besteffort-podffa3bd38_ea0e_49b2_933b_5ba9b1ba7638.slice - libcontainer container kubepods-besteffort-podffa3bd38_ea0e_49b2_933b_5ba9b1ba7638.slice. May 9 01:14:36.936898 systemd[1]: Created slice kubepods-besteffort-pod1ac9be78_6b39_45f7_8e12_a5b27024ca27.slice - libcontainer container kubepods-besteffort-pod1ac9be78_6b39_45f7_8e12_a5b27024ca27.slice. 
May 9 01:14:36.969053 kubelet[2813]: I0509 01:14:36.969011 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9nh8\" (UniqueName: \"kubernetes.io/projected/f44564ce-4f13-4505-b7e9-09ffdcbb2bc0-kube-api-access-q9nh8\") pod \"coredns-7db6d8ff4d-jbv4v\" (UID: \"f44564ce-4f13-4505-b7e9-09ffdcbb2bc0\") " pod="kube-system/coredns-7db6d8ff4d-jbv4v" May 9 01:14:36.969175 kubelet[2813]: I0509 01:14:36.969062 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638-calico-apiserver-certs\") pod \"calico-apiserver-98b7b8ffb-8f46k\" (UID: \"ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638\") " pod="calico-apiserver/calico-apiserver-98b7b8ffb-8f46k" May 9 01:14:36.969175 kubelet[2813]: I0509 01:14:36.969084 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kxk7\" (UniqueName: \"kubernetes.io/projected/1ac9be78-6b39-45f7-8e12-a5b27024ca27-kube-api-access-6kxk7\") pod \"calico-apiserver-98b7b8ffb-59pkm\" (UID: \"1ac9be78-6b39-45f7-8e12-a5b27024ca27\") " pod="calico-apiserver/calico-apiserver-98b7b8ffb-59pkm" May 9 01:14:36.969175 kubelet[2813]: I0509 01:14:36.969112 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23a466ce-0281-447f-9cb4-09d9798f1f80-tigera-ca-bundle\") pod \"calico-kube-controllers-96d7d7cd5-lf972\" (UID: \"23a466ce-0281-447f-9cb4-09d9798f1f80\") " pod="calico-system/calico-kube-controllers-96d7d7cd5-lf972" May 9 01:14:36.969175 kubelet[2813]: I0509 01:14:36.969171 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnnkg\" (UniqueName: \"kubernetes.io/projected/ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638-kube-api-access-gnnkg\") 
pod \"calico-apiserver-98b7b8ffb-8f46k\" (UID: \"ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638\") " pod="calico-apiserver/calico-apiserver-98b7b8ffb-8f46k" May 9 01:14:36.969287 kubelet[2813]: I0509 01:14:36.969193 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1ac9be78-6b39-45f7-8e12-a5b27024ca27-calico-apiserver-certs\") pod \"calico-apiserver-98b7b8ffb-59pkm\" (UID: \"1ac9be78-6b39-45f7-8e12-a5b27024ca27\") " pod="calico-apiserver/calico-apiserver-98b7b8ffb-59pkm" May 9 01:14:36.969287 kubelet[2813]: I0509 01:14:36.969226 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bklrt\" (UniqueName: \"kubernetes.io/projected/23a466ce-0281-447f-9cb4-09d9798f1f80-kube-api-access-bklrt\") pod \"calico-kube-controllers-96d7d7cd5-lf972\" (UID: \"23a466ce-0281-447f-9cb4-09d9798f1f80\") " pod="calico-system/calico-kube-controllers-96d7d7cd5-lf972" May 9 01:14:36.969287 kubelet[2813]: I0509 01:14:36.969248 2813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f44564ce-4f13-4505-b7e9-09ffdcbb2bc0-config-volume\") pod \"coredns-7db6d8ff4d-jbv4v\" (UID: \"f44564ce-4f13-4505-b7e9-09ffdcbb2bc0\") " pod="kube-system/coredns-7db6d8ff4d-jbv4v" May 9 01:14:37.209436 containerd[1478]: time="2025-05-09T01:14:37.209373518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9p94z,Uid:f9d20423-e293-4efa-8bcc-bf0ab9f333b5,Namespace:kube-system,Attempt:0,}" May 9 01:14:37.241596 containerd[1478]: time="2025-05-09T01:14:37.241278001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-98b7b8ffb-8f46k,Uid:ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638,Namespace:calico-apiserver,Attempt:0,}" May 9 01:14:37.349955 containerd[1478]: time="2025-05-09T01:14:37.349866547Z" level=error 
msg="Failed to destroy network for sandbox \"161e89b40239fd60554a67a8ecca5c10a65f7fb4b6ba8a85316f38f1ead2b8b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.352979 systemd[1]: run-netns-cni\x2d0f9d30f8\x2d487d\x2debec\x2d3861\x2dfa03ba30f180.mount: Deactivated successfully. May 9 01:14:37.355297 containerd[1478]: time="2025-05-09T01:14:37.355223933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-98b7b8ffb-8f46k,Uid:ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"161e89b40239fd60554a67a8ecca5c10a65f7fb4b6ba8a85316f38f1ead2b8b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.355540 kubelet[2813]: E0509 01:14:37.355495 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"161e89b40239fd60554a67a8ecca5c10a65f7fb4b6ba8a85316f38f1ead2b8b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.355613 kubelet[2813]: E0509 01:14:37.355576 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"161e89b40239fd60554a67a8ecca5c10a65f7fb4b6ba8a85316f38f1ead2b8b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-98b7b8ffb-8f46k" May 9 01:14:37.355613 
kubelet[2813]: E0509 01:14:37.355601 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"161e89b40239fd60554a67a8ecca5c10a65f7fb4b6ba8a85316f38f1ead2b8b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-98b7b8ffb-8f46k" May 9 01:14:37.355678 kubelet[2813]: E0509 01:14:37.355652 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-98b7b8ffb-8f46k_calico-apiserver(ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-98b7b8ffb-8f46k_calico-apiserver(ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"161e89b40239fd60554a67a8ecca5c10a65f7fb4b6ba8a85316f38f1ead2b8b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-98b7b8ffb-8f46k" podUID="ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638" May 9 01:14:37.357734 containerd[1478]: time="2025-05-09T01:14:37.356988692Z" level=error msg="Failed to destroy network for sandbox \"851aecbf616a3ec659e3bbd6df19d0818e1be5b75664cbf1e41067ba6b8ec198\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.361222 systemd[1]: run-netns-cni\x2d1dcd1437\x2d731b\x2d58f1\x2d2f24\x2d9e9f9014f29f.mount: Deactivated successfully. 
May 9 01:14:37.361533 containerd[1478]: time="2025-05-09T01:14:37.361452372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9p94z,Uid:f9d20423-e293-4efa-8bcc-bf0ab9f333b5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"851aecbf616a3ec659e3bbd6df19d0818e1be5b75664cbf1e41067ba6b8ec198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.361886 kubelet[2813]: E0509 01:14:37.361847 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"851aecbf616a3ec659e3bbd6df19d0818e1be5b75664cbf1e41067ba6b8ec198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.361957 kubelet[2813]: E0509 01:14:37.361902 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"851aecbf616a3ec659e3bbd6df19d0818e1be5b75664cbf1e41067ba6b8ec198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9p94z" May 9 01:14:37.361957 kubelet[2813]: E0509 01:14:37.361925 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"851aecbf616a3ec659e3bbd6df19d0818e1be5b75664cbf1e41067ba6b8ec198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-9p94z" May 9 01:14:37.362057 kubelet[2813]: E0509 01:14:37.361968 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9p94z_kube-system(f9d20423-e293-4efa-8bcc-bf0ab9f333b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9p94z_kube-system(f9d20423-e293-4efa-8bcc-bf0ab9f333b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"851aecbf616a3ec659e3bbd6df19d0818e1be5b75664cbf1e41067ba6b8ec198\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9p94z" podUID="f9d20423-e293-4efa-8bcc-bf0ab9f333b5" May 9 01:14:37.522618 containerd[1478]: time="2025-05-09T01:14:37.522157257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96d7d7cd5-lf972,Uid:23a466ce-0281-447f-9cb4-09d9798f1f80,Namespace:calico-system,Attempt:0,}" May 9 01:14:37.532908 containerd[1478]: time="2025-05-09T01:14:37.532834979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jbv4v,Uid:f44564ce-4f13-4505-b7e9-09ffdcbb2bc0,Namespace:kube-system,Attempt:0,}" May 9 01:14:37.542199 containerd[1478]: time="2025-05-09T01:14:37.542166621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-98b7b8ffb-59pkm,Uid:1ac9be78-6b39-45f7-8e12-a5b27024ca27,Namespace:calico-apiserver,Attempt:0,}" May 9 01:14:37.609128 containerd[1478]: time="2025-05-09T01:14:37.609078640Z" level=error msg="Failed to destroy network for sandbox \"7f4f09984501fee5d76e7fff1a65edbfa7bb258237fd0968ea1073a3283b9995\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.614338 
containerd[1478]: time="2025-05-09T01:14:37.613902751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96d7d7cd5-lf972,Uid:23a466ce-0281-447f-9cb4-09d9798f1f80,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f4f09984501fee5d76e7fff1a65edbfa7bb258237fd0968ea1073a3283b9995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.615598 kubelet[2813]: E0509 01:14:37.614690 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f4f09984501fee5d76e7fff1a65edbfa7bb258237fd0968ea1073a3283b9995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.615598 kubelet[2813]: E0509 01:14:37.614759 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f4f09984501fee5d76e7fff1a65edbfa7bb258237fd0968ea1073a3283b9995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-96d7d7cd5-lf972" May 9 01:14:37.615598 kubelet[2813]: E0509 01:14:37.614783 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f4f09984501fee5d76e7fff1a65edbfa7bb258237fd0968ea1073a3283b9995\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-96d7d7cd5-lf972" May 9 01:14:37.615727 kubelet[2813]: E0509 01:14:37.614836 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-96d7d7cd5-lf972_calico-system(23a466ce-0281-447f-9cb4-09d9798f1f80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-96d7d7cd5-lf972_calico-system(23a466ce-0281-447f-9cb4-09d9798f1f80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f4f09984501fee5d76e7fff1a65edbfa7bb258237fd0968ea1073a3283b9995\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-96d7d7cd5-lf972" podUID="23a466ce-0281-447f-9cb4-09d9798f1f80" May 9 01:14:37.656538 containerd[1478]: time="2025-05-09T01:14:37.656483452Z" level=error msg="Failed to destroy network for sandbox \"425b297f43341288732b6bb3fe5fbc9e870a62f1bdcdfdc1441591e50d34a756\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.658909 containerd[1478]: time="2025-05-09T01:14:37.658832694Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jbv4v,Uid:f44564ce-4f13-4505-b7e9-09ffdcbb2bc0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"425b297f43341288732b6bb3fe5fbc9e870a62f1bdcdfdc1441591e50d34a756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.659286 kubelet[2813]: E0509 01:14:37.659099 2813 remote_runtime.go:193] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"425b297f43341288732b6bb3fe5fbc9e870a62f1bdcdfdc1441591e50d34a756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.659286 kubelet[2813]: E0509 01:14:37.659166 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"425b297f43341288732b6bb3fe5fbc9e870a62f1bdcdfdc1441591e50d34a756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jbv4v" May 9 01:14:37.659286 kubelet[2813]: E0509 01:14:37.659194 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"425b297f43341288732b6bb3fe5fbc9e870a62f1bdcdfdc1441591e50d34a756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-jbv4v" May 9 01:14:37.659582 kubelet[2813]: E0509 01:14:37.659247 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jbv4v_kube-system(f44564ce-4f13-4505-b7e9-09ffdcbb2bc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jbv4v_kube-system(f44564ce-4f13-4505-b7e9-09ffdcbb2bc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"425b297f43341288732b6bb3fe5fbc9e870a62f1bdcdfdc1441591e50d34a756\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-jbv4v" podUID="f44564ce-4f13-4505-b7e9-09ffdcbb2bc0" May 9 01:14:37.662876 containerd[1478]: time="2025-05-09T01:14:37.662829002Z" level=error msg="Failed to destroy network for sandbox \"aa867f4eec95bf4b0a5d20b03eac4061064fd26ad6f8068435f6a360f88dda73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.664713 containerd[1478]: time="2025-05-09T01:14:37.664653003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-98b7b8ffb-59pkm,Uid:1ac9be78-6b39-45f7-8e12-a5b27024ca27,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa867f4eec95bf4b0a5d20b03eac4061064fd26ad6f8068435f6a360f88dda73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.665281 kubelet[2813]: E0509 01:14:37.664915 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa867f4eec95bf4b0a5d20b03eac4061064fd26ad6f8068435f6a360f88dda73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.665281 kubelet[2813]: E0509 01:14:37.665065 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa867f4eec95bf4b0a5d20b03eac4061064fd26ad6f8068435f6a360f88dda73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-98b7b8ffb-59pkm" May 9 01:14:37.665281 kubelet[2813]: E0509 01:14:37.665129 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa867f4eec95bf4b0a5d20b03eac4061064fd26ad6f8068435f6a360f88dda73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-98b7b8ffb-59pkm" May 9 01:14:37.665393 kubelet[2813]: E0509 01:14:37.665184 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-98b7b8ffb-59pkm_calico-apiserver(1ac9be78-6b39-45f7-8e12-a5b27024ca27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-98b7b8ffb-59pkm_calico-apiserver(1ac9be78-6b39-45f7-8e12-a5b27024ca27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa867f4eec95bf4b0a5d20b03eac4061064fd26ad6f8068435f6a360f88dda73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-98b7b8ffb-59pkm" podUID="1ac9be78-6b39-45f7-8e12-a5b27024ca27" May 9 01:14:37.766613 systemd[1]: Created slice kubepods-besteffort-pod4d010afc_8605_44c1_9991_fd6272876d69.slice - libcontainer container kubepods-besteffort-pod4d010afc_8605_44c1_9991_fd6272876d69.slice. 
May 9 01:14:37.771411 containerd[1478]: time="2025-05-09T01:14:37.771338819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgbc8,Uid:4d010afc-8605-44c1-9991-fd6272876d69,Namespace:calico-system,Attempt:0,}" May 9 01:14:37.838161 containerd[1478]: time="2025-05-09T01:14:37.837409972Z" level=error msg="Failed to destroy network for sandbox \"2b6cf85472702e9ca588f46b5d31ef05bc347ba099abaf5f8ceb560659b1d433\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.840297 containerd[1478]: time="2025-05-09T01:14:37.840162905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgbc8,Uid:4d010afc-8605-44c1-9991-fd6272876d69,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6cf85472702e9ca588f46b5d31ef05bc347ba099abaf5f8ceb560659b1d433\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.840941 kubelet[2813]: E0509 01:14:37.840532 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6cf85472702e9ca588f46b5d31ef05bc347ba099abaf5f8ceb560659b1d433\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:14:37.840941 kubelet[2813]: E0509 01:14:37.840597 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6cf85472702e9ca588f46b5d31ef05bc347ba099abaf5f8ceb560659b1d433\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zgbc8" May 9 01:14:37.840941 kubelet[2813]: E0509 01:14:37.840626 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b6cf85472702e9ca588f46b5d31ef05bc347ba099abaf5f8ceb560659b1d433\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zgbc8" May 9 01:14:37.841084 kubelet[2813]: E0509 01:14:37.840683 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zgbc8_calico-system(4d010afc-8605-44c1-9991-fd6272876d69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zgbc8_calico-system(4d010afc-8605-44c1-9991-fd6272876d69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b6cf85472702e9ca588f46b5d31ef05bc347ba099abaf5f8ceb560659b1d433\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zgbc8" podUID="4d010afc-8605-44c1-9991-fd6272876d69" May 9 01:14:37.949957 containerd[1478]: time="2025-05-09T01:14:37.949899336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 9 01:14:38.318084 systemd[1]: run-netns-cni\x2d851cf0a3\x2daeff\x2d0715\x2d8e12\x2d825a555423c4.mount: Deactivated successfully. May 9 01:14:38.318330 systemd[1]: run-netns-cni\x2d24911ff6\x2dccc6\x2d890a\x2d2899\x2dc6b32fbb8060.mount: Deactivated successfully. May 9 01:14:46.745616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423186352.mount: Deactivated successfully. 
May 9 01:14:46.872497 containerd[1478]: time="2025-05-09T01:14:46.872385200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:46.874142 containerd[1478]: time="2025-05-09T01:14:46.874046530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 9 01:14:46.875505 containerd[1478]: time="2025-05-09T01:14:46.875342391Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:46.878052 containerd[1478]: time="2025-05-09T01:14:46.877909688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:14:46.879008 containerd[1478]: time="2025-05-09T01:14:46.878551897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.928581959s" May 9 01:14:46.879008 containerd[1478]: time="2025-05-09T01:14:46.878603155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 9 01:14:46.896308 containerd[1478]: time="2025-05-09T01:14:46.896219133Z" level=info msg="CreateContainer within sandbox \"bb4e958ac2f8966ab22d3bd25922904836224c2f8b4b4c8ec505a1a077495523\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 9 01:14:46.923187 containerd[1478]: time="2025-05-09T01:14:46.923142226Z" level=info msg="Container 
3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73: CDI devices from CRI Config.CDIDevices: []" May 9 01:14:46.944937 containerd[1478]: time="2025-05-09T01:14:46.944897962Z" level=info msg="CreateContainer within sandbox \"bb4e958ac2f8966ab22d3bd25922904836224c2f8b4b4c8ec505a1a077495523\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\"" May 9 01:14:46.947729 containerd[1478]: time="2025-05-09T01:14:46.946000009Z" level=info msg="StartContainer for \"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\"" May 9 01:14:46.947729 containerd[1478]: time="2025-05-09T01:14:46.947643345Z" level=info msg="connecting to shim 3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73" address="unix:///run/containerd/s/5316270b9cbf2cb73afd8edb243de4febaafc6d0acfb308a846822793c4dc191" protocol=ttrpc version=3 May 9 01:14:47.002308 systemd[1]: Started cri-containerd-3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73.scope - libcontainer container 3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73. May 9 01:14:47.062315 containerd[1478]: time="2025-05-09T01:14:47.062258975Z" level=info msg="StartContainer for \"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" returns successfully" May 9 01:14:47.156200 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 9 01:14:47.156326 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 9 01:14:48.091955 kubelet[2813]: I0509 01:14:48.090877 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m4kn5" podStartSLOduration=2.264165838 podStartE2EDuration="27.090852501s" podCreationTimestamp="2025-05-09 01:14:21 +0000 UTC" firstStartedPulling="2025-05-09 01:14:22.05327496 +0000 UTC m=+23.418379429" lastFinishedPulling="2025-05-09 01:14:46.879961623 +0000 UTC m=+48.245066092" observedRunningTime="2025-05-09 01:14:48.068201246 +0000 UTC m=+49.433305775" watchObservedRunningTime="2025-05-09 01:14:48.090852501 +0000 UTC m=+49.455956980" May 9 01:14:48.163337 containerd[1478]: time="2025-05-09T01:14:48.163294318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"50cea2bdcddd6ed77f8c8e5478dd690f9015d21959ed2d073a0db46833e6448e\" pid:3809 exit_status:1 exited_at:{seconds:1746753288 nanos:162684860}" May 9 01:14:48.760641 containerd[1478]: time="2025-05-09T01:14:48.760591654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96d7d7cd5-lf972,Uid:23a466ce-0281-447f-9cb4-09d9798f1f80,Namespace:calico-system,Attempt:0,}" May 9 01:14:48.768230 containerd[1478]: time="2025-05-09T01:14:48.768173124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9p94z,Uid:f9d20423-e293-4efa-8bcc-bf0ab9f333b5,Namespace:kube-system,Attempt:0,}" May 9 01:14:48.907188 kernel: bpftool[3969]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 9 01:14:49.101964 systemd-networkd[1396]: calidd4d534212a: Link UP May 9 01:14:49.102190 systemd-networkd[1396]: calidd4d534212a: Gained carrier May 9 01:14:49.132836 systemd-networkd[1396]: calic03e6c06174: Link UP May 9 01:14:49.133047 systemd-networkd[1396]: calic03e6c06174: Gained carrier May 9 01:14:49.189892 containerd[1478]: 2025-05-09 01:14:48.892 [INFO][3933] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-eth0 calico-kube-controllers-96d7d7cd5- calico-system 23a466ce-0281-447f-9cb4-09d9798f1f80 688 0 2025-05-09 01:14:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:96d7d7cd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284-0-0-n-58e4f3488e.novalocal calico-kube-controllers-96d7d7cd5-lf972 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidd4d534212a [] []}} ContainerID="0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" Namespace="calico-system" Pod="calico-kube-controllers-96d7d7cd5-lf972" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-" May 9 01:14:49.189892 containerd[1478]: 2025-05-09 01:14:48.892 [INFO][3933] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" Namespace="calico-system" Pod="calico-kube-controllers-96d7d7cd5-lf972" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-eth0" May 9 01:14:49.189892 containerd[1478]: 2025-05-09 01:14:48.946 [INFO][3974] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" HandleID="k8s-pod-network.0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" Workload="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-eth0" May 9 01:14:49.190515 containerd[1478]: 2025-05-09 01:14:48.962 [INFO][3974] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" 
HandleID="k8s-pod-network.0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" Workload="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031d740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284-0-0-n-58e4f3488e.novalocal", "pod":"calico-kube-controllers-96d7d7cd5-lf972", "timestamp":"2025-05-09 01:14:48.945350526 +0000 UTC"}, Hostname:"ci-4284-0-0-n-58e4f3488e.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 01:14:49.190515 containerd[1478]: 2025-05-09 01:14:48.962 [INFO][3974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 01:14:49.190515 containerd[1478]: 2025-05-09 01:14:48.963 [INFO][3974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 01:14:49.190515 containerd[1478]: 2025-05-09 01:14:48.963 [INFO][3974] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-58e4f3488e.novalocal' May 9 01:14:49.190515 containerd[1478]: 2025-05-09 01:14:48.967 [INFO][3974] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.190515 containerd[1478]: 2025-05-09 01:14:48.977 [INFO][3974] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.190515 containerd[1478]: 2025-05-09 01:14:48.989 [INFO][3974] ipam/ipam.go 489: Trying affinity for 192.168.63.192/26 host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.190515 containerd[1478]: 2025-05-09 01:14:48.992 [INFO][3974] ipam/ipam.go 155: Attempting to load block cidr=192.168.63.192/26 host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.190515 
containerd[1478]: 2025-05-09 01:14:48.996 [INFO][3974] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.192/26 host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.190796 containerd[1478]: 2025-05-09 01:14:48.996 [INFO][3974] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.192/26 handle="k8s-pod-network.0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.190796 containerd[1478]: 2025-05-09 01:14:48.998 [INFO][3974] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8 May 9 01:14:49.190796 containerd[1478]: 2025-05-09 01:14:49.026 [INFO][3974] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.63.192/26 handle="k8s-pod-network.0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.190796 containerd[1478]: 2025-05-09 01:14:49.036 [INFO][3974] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.63.193/26] block=192.168.63.192/26 handle="k8s-pod-network.0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.190796 containerd[1478]: 2025-05-09 01:14:49.037 [INFO][3974] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.193/26] handle="k8s-pod-network.0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.190796 containerd[1478]: 2025-05-09 01:14:49.037 [INFO][3974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 9 01:14:49.190796 containerd[1478]: 2025-05-09 01:14:49.037 [INFO][3974] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.193/26] IPv6=[] ContainerID="0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" HandleID="k8s-pod-network.0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" Workload="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-eth0" May 9 01:14:49.191012 containerd[1478]: 2025-05-09 01:14:49.040 [INFO][3933] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" Namespace="calico-system" Pod="calico-kube-controllers-96d7d7cd5-lf972" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-eth0", GenerateName:"calico-kube-controllers-96d7d7cd5-", Namespace:"calico-system", SelfLink:"", UID:"23a466ce-0281-447f-9cb4-09d9798f1f80", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 1, 14, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"96d7d7cd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-58e4f3488e.novalocal", ContainerID:"", Pod:"calico-kube-controllers-96d7d7cd5-lf972", 
Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd4d534212a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 01:14:49.191086 containerd[1478]: 2025-05-09 01:14:49.040 [INFO][3933] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.63.193/32] ContainerID="0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" Namespace="calico-system" Pod="calico-kube-controllers-96d7d7cd5-lf972" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-eth0" May 9 01:14:49.191086 containerd[1478]: 2025-05-09 01:14:49.040 [INFO][3933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd4d534212a ContainerID="0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" Namespace="calico-system" Pod="calico-kube-controllers-96d7d7cd5-lf972" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-eth0" May 9 01:14:49.191086 containerd[1478]: 2025-05-09 01:14:49.129 [INFO][3933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" Namespace="calico-system" Pod="calico-kube-controllers-96d7d7cd5-lf972" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-eth0" May 9 01:14:49.191282 containerd[1478]: 2025-05-09 01:14:49.130 [INFO][3933] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" Namespace="calico-system" Pod="calico-kube-controllers-96d7d7cd5-lf972" 
WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-eth0", GenerateName:"calico-kube-controllers-96d7d7cd5-", Namespace:"calico-system", SelfLink:"", UID:"23a466ce-0281-447f-9cb4-09d9798f1f80", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 1, 14, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"96d7d7cd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-58e4f3488e.novalocal", ContainerID:"0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8", Pod:"calico-kube-controllers-96d7d7cd5-lf972", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidd4d534212a", MAC:"66:5b:cb:9a:f0:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 01:14:49.191352 containerd[1478]: 2025-05-09 01:14:49.186 [INFO][3933] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8" Namespace="calico-system" 
Pod="calico-kube-controllers-96d7d7cd5-lf972" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-calico--kube--controllers--96d7d7cd5--lf972-eth0" May 9 01:14:49.196445 containerd[1478]: time="2025-05-09T01:14:49.195887883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"d70104bf382b363fe16004cca12e613e0d20812512f20cb2e4f95340409f7fd1\" pid:3995 exit_status:1 exited_at:{seconds:1746753289 nanos:195156857}" May 9 01:14:49.208711 containerd[1478]: 2025-05-09 01:14:48.820 [INFO][3943] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 9 01:14:49.208711 containerd[1478]: 2025-05-09 01:14:48.884 [INFO][3943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-eth0 coredns-7db6d8ff4d- kube-system f9d20423-e293-4efa-8bcc-bf0ab9f333b5 685 0 2025-05-09 01:14:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284-0-0-n-58e4f3488e.novalocal coredns-7db6d8ff4d-9p94z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic03e6c06174 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9p94z" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-" May 9 01:14:49.208711 containerd[1478]: 2025-05-09 01:14:48.886 [INFO][3943] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9p94z" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-eth0" 
May 9 01:14:49.208711 containerd[1478]: 2025-05-09 01:14:48.972 [INFO][3964] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" HandleID="k8s-pod-network.6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" Workload="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-eth0" May 9 01:14:49.209219 containerd[1478]: 2025-05-09 01:14:48.993 [INFO][3964] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" HandleID="k8s-pod-network.6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" Workload="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b6f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284-0-0-n-58e4f3488e.novalocal", "pod":"coredns-7db6d8ff4d-9p94z", "timestamp":"2025-05-09 01:14:48.972872715 +0000 UTC"}, Hostname:"ci-4284-0-0-n-58e4f3488e.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 01:14:49.209219 containerd[1478]: 2025-05-09 01:14:48.993 [INFO][3964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 01:14:49.209219 containerd[1478]: 2025-05-09 01:14:49.037 [INFO][3964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 9 01:14:49.209219 containerd[1478]: 2025-05-09 01:14:49.037 [INFO][3964] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-58e4f3488e.novalocal' May 9 01:14:49.209219 containerd[1478]: 2025-05-09 01:14:49.043 [INFO][3964] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.209219 containerd[1478]: 2025-05-09 01:14:49.050 [INFO][3964] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.209219 containerd[1478]: 2025-05-09 01:14:49.058 [INFO][3964] ipam/ipam.go 489: Trying affinity for 192.168.63.192/26 host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.209219 containerd[1478]: 2025-05-09 01:14:49.060 [INFO][3964] ipam/ipam.go 155: Attempting to load block cidr=192.168.63.192/26 host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.209219 containerd[1478]: 2025-05-09 01:14:49.068 [INFO][3964] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.192/26 host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.209881 containerd[1478]: 2025-05-09 01:14:49.068 [INFO][3964] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.192/26 handle="k8s-pod-network.6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.209881 containerd[1478]: 2025-05-09 01:14:49.073 [INFO][3964] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444 May 9 01:14:49.209881 containerd[1478]: 2025-05-09 01:14:49.093 [INFO][3964] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.63.192/26 handle="k8s-pod-network.6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.209881 
containerd[1478]: 2025-05-09 01:14:49.125 [INFO][3964] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.63.194/26] block=192.168.63.192/26 handle="k8s-pod-network.6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.209881 containerd[1478]: 2025-05-09 01:14:49.125 [INFO][3964] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.194/26] handle="k8s-pod-network.6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" host="ci-4284-0-0-n-58e4f3488e.novalocal" May 9 01:14:49.209881 containerd[1478]: 2025-05-09 01:14:49.125 [INFO][3964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 01:14:49.209881 containerd[1478]: 2025-05-09 01:14:49.125 [INFO][3964] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.194/26] IPv6=[] ContainerID="6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" HandleID="k8s-pod-network.6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" Workload="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-eth0" May 9 01:14:49.210223 containerd[1478]: 2025-05-09 01:14:49.128 [INFO][3943] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9p94z" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f9d20423-e293-4efa-8bcc-bf0ab9f333b5", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 1, 14, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-58e4f3488e.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-9p94z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic03e6c06174", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 01:14:49.210223 containerd[1478]: 2025-05-09 01:14:49.128 [INFO][3943] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.63.194/32] ContainerID="6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9p94z" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-eth0" May 9 01:14:49.210223 containerd[1478]: 2025-05-09 01:14:49.129 [INFO][3943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic03e6c06174 ContainerID="6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9p94z" 
WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-eth0" May 9 01:14:49.210223 containerd[1478]: 2025-05-09 01:14:49.133 [INFO][3943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9p94z" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-eth0" May 9 01:14:49.210223 containerd[1478]: 2025-05-09 01:14:49.133 [INFO][3943] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9p94z" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f9d20423-e293-4efa-8bcc-bf0ab9f333b5", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 1, 14, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-58e4f3488e.novalocal", ContainerID:"6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444", Pod:"coredns-7db6d8ff4d-9p94z", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.63.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic03e6c06174", MAC:"42:95:fe:23:4e:21", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 01:14:49.210223 containerd[1478]: 2025-05-09 01:14:49.204 [INFO][3943] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9p94z" WorkloadEndpoint="ci--4284--0--0--n--58e4f3488e.novalocal-k8s-coredns--7db6d8ff4d--9p94z-eth0" May 9 01:14:49.415154 systemd-networkd[1396]: vxlan.calico: Link UP May 9 01:14:49.415165 systemd-networkd[1396]: vxlan.calico: Gained carrier May 9 01:14:49.754364 containerd[1478]: time="2025-05-09T01:14:49.754289220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-98b7b8ffb-59pkm,Uid:1ac9be78-6b39-45f7-8e12-a5b27024ca27,Namespace:calico-apiserver,Attempt:0,}" May 9 01:14:50.297293 systemd-networkd[1396]: calic03e6c06174: Gained IPv6LL May 9 01:14:50.299046 systemd-networkd[1396]: calidd4d534212a: Gained IPv6LL May 9 01:14:50.756082 containerd[1478]: time="2025-05-09T01:14:50.755228612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zgbc8,Uid:4d010afc-8605-44c1-9991-fd6272876d69,Namespace:calico-system,Attempt:0,}" May 9 01:14:50.757746 containerd[1478]: time="2025-05-09T01:14:50.757337564Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jbv4v,Uid:f44564ce-4f13-4505-b7e9-09ffdcbb2bc0,Namespace:kube-system,Attempt:0,}" May 9 01:14:51.321335 systemd-networkd[1396]: vxlan.calico: Gained IPv6LL May 9 01:14:51.755689 containerd[1478]: time="2025-05-09T01:14:51.755527199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-98b7b8ffb-8f46k,Uid:ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638,Namespace:calico-apiserver,Attempt:0,}" May 9 01:15:08.784292 containerd[1478]: time="2025-05-09T01:15:08.784237552Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"ed304db804ebd3c1950b310994fedd95075a96dbc4d6e9f230c1865e0ea12de2\" pid:4137 exited_at:{seconds:1746753308 nanos:782903042}" May 9 01:15:14.753141 kubelet[2813]: E0509 01:15:14.753038 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:15:14.854216 kubelet[2813]: E0509 01:15:14.854067 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:15:15.054346 kubelet[2813]: E0509 01:15:15.054195 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:15:15.455116 kubelet[2813]: E0509 01:15:15.455034 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:15:16.255701 kubelet[2813]: E0509 01:15:16.255643 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:15:17.856033 kubelet[2813]: E0509 01:15:17.855990 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:15:20.458881 kubelet[2813]: I0509 01:15:20.458763 2813 setters.go:580] "Node became not ready" node="ci-4284-0-0-n-58e4f3488e.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-09T01:15:20Z","lastTransitionTime":"2025-05-09T01:15:20Z","reason":"KubeletNotReady","message":"container runtime is down"} 
May 9 01:15:21.056756 kubelet[2813]: E0509 01:15:21.056564 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:15:26.057376 kubelet[2813]: E0509 01:15:26.057301 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:15:31.058621 kubelet[2813]: E0509 01:15:31.058460 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:15:36.059375 kubelet[2813]: E0509 01:15:36.059285 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:15:38.796182 containerd[1478]: time="2025-05-09T01:15:38.795899718Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"0f4cfe41d4d2c934be4076f386bd8e75aaffbf2c42f1455abea9ebb396ffea8c\" pid:4182 exited_at:{seconds:1746753338 nanos:795540213}" May 9 01:15:41.060131 kubelet[2813]: E0509 01:15:41.060029 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:15:46.061255 kubelet[2813]: E0509 01:15:46.061101 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:15:51.061865 kubelet[2813]: E0509 01:15:51.061725 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:15:56.064613 kubelet[2813]: E0509 01:15:56.063010 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:16:01.063697 kubelet[2813]: E0509 01:16:01.063605 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:16:06.063851 kubelet[2813]: E0509 01:16:06.063777 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:16:08.810365 containerd[1478]: time="2025-05-09T01:16:08.810153811Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" 
id:\"900f11e308cb2cf64af96d6d22be8d846bd84c8c82df19a5ef173714e284966c\" pid:4211 exited_at:{seconds:1746753368 nanos:809484093}" May 9 01:16:11.064192 kubelet[2813]: E0509 01:16:11.064113 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:16:16.064632 kubelet[2813]: E0509 01:16:16.064526 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:16:21.065061 kubelet[2813]: E0509 01:16:21.064944 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:16:26.066003 kubelet[2813]: E0509 01:16:26.065869 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:16:31.066779 kubelet[2813]: E0509 01:16:31.066676 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:16:36.067305 kubelet[2813]: E0509 01:16:36.067205 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:16:38.813329 containerd[1478]: time="2025-05-09T01:16:38.813150728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"580ebfe85d7026fb18673944d05a0aebe099a7929f42e65c4a3e90559910755f\" pid:4257 exited_at:{seconds:1746753398 nanos:812196156}" May 9 01:16:41.067485 kubelet[2813]: E0509 01:16:41.067419 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:16:46.068078 kubelet[2813]: E0509 01:16:46.068005 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:16:48.839755 kubelet[2813]: E0509 01:16:48.839678 2813 remote_runtime.go:633] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 9 01:16:48.840227 kubelet[2813]: E0509 01:16:48.839774 2813 kubelet.go:2885] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context 
deadline exceeded" May 9 01:16:51.068650 kubelet[2813]: E0509 01:16:51.068525 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:16:56.069512 kubelet[2813]: E0509 01:16:56.069298 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:17:01.070850 kubelet[2813]: E0509 01:17:01.070149 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:17:06.071970 kubelet[2813]: E0509 01:17:06.071502 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:17:08.820236 containerd[1478]: time="2025-05-09T01:17:08.819964317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"58959647b12493ac8a57726044b5d7f1bfa3ca3f1e8553d1d9b488364e43f16e\" pid:4286 exited_at:{seconds:1746753428 nanos:818838307}" May 9 01:17:11.071911 kubelet[2813]: E0509 01:17:11.071809 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:17:16.072183 kubelet[2813]: E0509 01:17:16.072021 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:17:21.076126 kubelet[2813]: E0509 01:17:21.074847 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:17:26.075602 kubelet[2813]: E0509 01:17:26.075322 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:17:31.080104 kubelet[2813]: E0509 01:17:31.077406 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:17:33.561036 update_engine[1461]: I20250509 01:17:33.559224 1461 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 9 01:17:33.561036 update_engine[1461]: I20250509 01:17:33.559670 1461 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 9 
01:17:33.565309 update_engine[1461]: I20250509 01:17:33.563771 1461 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 9 01:17:33.572511 update_engine[1461]: I20250509 01:17:33.570444 1461 omaha_request_params.cc:62] Current group set to alpha May 9 01:17:33.572511 update_engine[1461]: I20250509 01:17:33.571547 1461 update_attempter.cc:499] Already updated boot flags. Skipping. May 9 01:17:33.572511 update_engine[1461]: I20250509 01:17:33.571580 1461 update_attempter.cc:643] Scheduling an action processor start. May 9 01:17:33.572511 update_engine[1461]: I20250509 01:17:33.571664 1461 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 9 01:17:33.572511 update_engine[1461]: I20250509 01:17:33.571939 1461 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 9 01:17:33.573604 update_engine[1461]: I20250509 01:17:33.573549 1461 omaha_request_action.cc:271] Posting an Omaha request to disabled May 9 01:17:33.574205 update_engine[1461]: I20250509 01:17:33.573931 1461 omaha_request_action.cc:272] Request: May 9 01:17:33.574205 update_engine[1461]: May 9 01:17:33.574205 update_engine[1461]: May 9 01:17:33.574205 update_engine[1461]: May 9 01:17:33.574205 update_engine[1461]: May 9 01:17:33.574205 update_engine[1461]: May 9 01:17:33.574205 update_engine[1461]: May 9 01:17:33.574205 update_engine[1461]: May 9 01:17:33.574205 update_engine[1461]: May 9 01:17:33.577203 update_engine[1461]: I20250509 01:17:33.575160 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 9 01:17:33.584224 locksmithd[1483]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 9 01:17:33.590454 update_engine[1461]: I20250509 01:17:33.590358 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 9 01:17:33.591849 update_engine[1461]: I20250509 01:17:33.591715 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 9 01:17:33.599718 update_engine[1461]: E20250509 01:17:33.599613 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 9 01:17:33.599934 update_engine[1461]: I20250509 01:17:33.599855 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 9 01:17:36.080902 kubelet[2813]: E0509 01:17:36.080731 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:17:38.840355 containerd[1478]: time="2025-05-09T01:17:38.839493402Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"d8c34722719395ba86080bbd146e5c96117dc0176907bfcfdf39b42b7347c491\" pid:4320 exited_at:{seconds:1746753458 nanos:834679618}" May 9 01:17:41.081490 kubelet[2813]: E0509 01:17:41.081398 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:17:43.470370 update_engine[1461]: I20250509 01:17:43.470052 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 9 01:17:43.470848 update_engine[1461]: I20250509 01:17:43.470573 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 9 01:17:43.470907 update_engine[1461]: I20250509 01:17:43.470878 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 9 01:17:43.476572 update_engine[1461]: E20250509 01:17:43.476517 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 9 01:17:43.476705 update_engine[1461]: I20250509 01:17:43.476609 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 9 01:17:46.084874 kubelet[2813]: E0509 01:17:46.083988 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:17:48.509385 systemd[1]: Started sshd@9-172.24.4.244:22-172.24.4.1:46994.service - OpenSSH per-connection server daemon (172.24.4.1:46994). 
May 9 01:17:49.906787 sshd[4338]: Accepted publickey for core from 172.24.4.1 port 46994 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4 May 9 01:17:49.913564 sshd-session[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:17:49.945295 systemd-logind[1455]: New session 12 of user core. May 9 01:17:49.957853 systemd[1]: Started session-12.scope - Session 12 of User core. May 9 01:17:50.655059 sshd[4340]: Connection closed by 172.24.4.1 port 46994 May 9 01:17:50.656419 sshd-session[4338]: pam_unix(sshd:session): session closed for user core May 9 01:17:50.660602 systemd[1]: sshd@9-172.24.4.244:22-172.24.4.1:46994.service: Deactivated successfully. May 9 01:17:50.663901 systemd[1]: session-12.scope: Deactivated successfully. May 9 01:17:50.666191 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. May 9 01:17:50.667961 systemd-logind[1455]: Removed session 12. May 9 01:17:51.085564 kubelet[2813]: E0509 01:17:51.085393 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:17:53.466025 update_engine[1461]: I20250509 01:17:53.464339 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 9 01:17:53.468366 update_engine[1461]: I20250509 01:17:53.467638 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 9 01:17:53.469889 update_engine[1461]: I20250509 01:17:53.469789 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 9 01:17:53.475882 update_engine[1461]: E20250509 01:17:53.475531 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 9 01:17:53.475882 update_engine[1461]: I20250509 01:17:53.475812 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 9 01:17:55.716130 systemd[1]: Started sshd@10-172.24.4.244:22-172.24.4.1:45932.service - OpenSSH per-connection server daemon (172.24.4.1:45932). 
May 9 01:17:56.086621 kubelet[2813]: E0509 01:17:56.086101 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:17:56.994652 sshd[4359]: Accepted publickey for core from 172.24.4.1 port 45932 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4 May 9 01:17:56.998808 sshd-session[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:17:57.029565 systemd-logind[1455]: New session 13 of user core. May 9 01:17:57.036368 systemd[1]: Started session-13.scope - Session 13 of User core. May 9 01:17:57.958091 sshd[4364]: Connection closed by 172.24.4.1 port 45932 May 9 01:17:57.957571 sshd-session[4359]: pam_unix(sshd:session): session closed for user core May 9 01:17:57.975458 systemd[1]: sshd@10-172.24.4.244:22-172.24.4.1:45932.service: Deactivated successfully. May 9 01:17:57.987253 systemd[1]: session-13.scope: Deactivated successfully. May 9 01:17:57.990827 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. May 9 01:17:57.995329 systemd-logind[1455]: Removed session 13. May 9 01:18:01.087083 kubelet[2813]: E0509 01:18:01.086913 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:18:02.989403 systemd[1]: Started sshd@11-172.24.4.244:22-172.24.4.1:45938.service - OpenSSH per-connection server daemon (172.24.4.1:45938). May 9 01:18:03.466592 update_engine[1461]: I20250509 01:18:03.466229 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 9 01:18:03.467939 update_engine[1461]: I20250509 01:18:03.467664 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 9 01:18:03.468937 update_engine[1461]: I20250509 01:18:03.468849 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 9 01:18:03.476111 update_engine[1461]: E20250509 01:18:03.474475 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 9 01:18:03.476111 update_engine[1461]: I20250509 01:18:03.474726 1461 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 9 01:18:03.476111 update_engine[1461]: I20250509 01:18:03.474776 1461 omaha_request_action.cc:617] Omaha request response: May 9 01:18:03.476111 update_engine[1461]: E20250509 01:18:03.475239 1461 omaha_request_action.cc:636] Omaha request network transfer failed. May 9 01:18:03.476111 update_engine[1461]: I20250509 01:18:03.475876 1461 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 9 01:18:03.476111 update_engine[1461]: I20250509 01:18:03.475921 1461 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 9 01:18:03.476853 update_engine[1461]: I20250509 01:18:03.476125 1461 update_attempter.cc:306] Processing Done. May 9 01:18:03.476853 update_engine[1461]: E20250509 01:18:03.476329 1461 update_attempter.cc:619] Update failed. May 9 01:18:03.476853 update_engine[1461]: I20250509 01:18:03.476379 1461 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 9 01:18:03.476853 update_engine[1461]: I20250509 01:18:03.476420 1461 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 9 01:18:03.476853 update_engine[1461]: I20250509 01:18:03.476435 1461 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 9 01:18:03.477417 update_engine[1461]: I20250509 01:18:03.477236 1461 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 9 01:18:03.477880 update_engine[1461]: I20250509 01:18:03.477417 1461 omaha_request_action.cc:271] Posting an Omaha request to disabled May 9 01:18:03.477880 update_engine[1461]: I20250509 01:18:03.477443 1461 omaha_request_action.cc:272] Request: May 9 01:18:03.477880 update_engine[1461]: May 9 01:18:03.477880 update_engine[1461]: May 9 01:18:03.477880 update_engine[1461]: May 9 01:18:03.477880 update_engine[1461]: May 9 01:18:03.477880 update_engine[1461]: May 9 01:18:03.477880 update_engine[1461]: May 9 01:18:03.477880 update_engine[1461]: I20250509 01:18:03.477459 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 9 01:18:03.477880 update_engine[1461]: I20250509 01:18:03.477825 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 9 01:18:03.479072 update_engine[1461]: I20250509 01:18:03.478829 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 9 01:18:03.482898 locksmithd[1483]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 9 01:18:03.483952 update_engine[1461]: E20250509 01:18:03.483893 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 9 01:18:03.484274 update_engine[1461]: I20250509 01:18:03.484073 1461 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 9 01:18:03.484274 update_engine[1461]: I20250509 01:18:03.484234 1461 omaha_request_action.cc:617] Omaha request response: May 9 01:18:03.484274 update_engine[1461]: I20250509 01:18:03.484257 1461 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 9 01:18:03.484604 update_engine[1461]: I20250509 01:18:03.484270 1461 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 9 01:18:03.484604 update_engine[1461]: I20250509 01:18:03.484352 1461 update_attempter.cc:306] Processing Done. May 9 01:18:03.484604 update_engine[1461]: I20250509 01:18:03.484373 1461 update_attempter.cc:310] Error event sent. May 9 01:18:03.484604 update_engine[1461]: I20250509 01:18:03.484421 1461 update_check_scheduler.cc:74] Next update check in 45m16s May 9 01:18:03.485278 locksmithd[1483]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 9 01:18:04.572180 sshd[4379]: Accepted publickey for core from 172.24.4.1 port 45938 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4 May 9 01:18:04.577294 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:18:04.594887 systemd-logind[1455]: New session 14 of user core. May 9 01:18:04.602360 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 9 01:18:05.489108 sshd[4388]: Connection closed by 172.24.4.1 port 45938
May 9 01:18:05.490714 sshd-session[4379]: pam_unix(sshd:session): session closed for user core
May 9 01:18:05.500797 systemd[1]: sshd@11-172.24.4.244:22-172.24.4.1:45938.service: Deactivated successfully.
May 9 01:18:05.511321 systemd[1]: session-14.scope: Deactivated successfully.
May 9 01:18:05.514427 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit.
May 9 01:18:05.517256 systemd-logind[1455]: Removed session 14.
May 9 01:18:06.087970 kubelet[2813]: E0509 01:18:06.087757 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:18:08.836370 containerd[1478]: time="2025-05-09T01:18:08.836165423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"c9162a56ca3eaf830b31dcdbf82c33326a4c1bd884e57b270cdddd25c2316304\" pid:4412 exited_at:{seconds:1746753488 nanos:835267144}"
May 9 01:18:10.509055 systemd[1]: Started sshd@12-172.24.4.244:22-172.24.4.1:59828.service - OpenSSH per-connection server daemon (172.24.4.1:59828).
May 9 01:18:11.088226 kubelet[2813]: E0509 01:18:11.088107 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:18:11.811304 sshd[4425]: Accepted publickey for core from 172.24.4.1 port 59828 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:18:11.815669 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:18:11.829789 systemd-logind[1455]: New session 15 of user core.
May 9 01:18:11.836423 systemd[1]: Started session-15.scope - Session 15 of User core.
May 9 01:18:12.700247 sshd[4427]: Connection closed by 172.24.4.1 port 59828
May 9 01:18:12.702262 sshd-session[4425]: pam_unix(sshd:session): session closed for user core
May 9 01:18:12.720471 systemd[1]: sshd@12-172.24.4.244:22-172.24.4.1:59828.service: Deactivated successfully.
May 9 01:18:12.728852 systemd[1]: session-15.scope: Deactivated successfully.
May 9 01:18:12.731408 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit.
May 9 01:18:12.734967 systemd-logind[1455]: Removed session 15.
May 9 01:18:16.088623 kubelet[2813]: E0509 01:18:16.088493 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:18:17.730342 systemd[1]: Started sshd@13-172.24.4.244:22-172.24.4.1:42676.service - OpenSSH per-connection server daemon (172.24.4.1:42676).
May 9 01:18:18.966392 sshd[4446]: Accepted publickey for core from 172.24.4.1 port 42676 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:18:18.979906 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:18:19.023597 systemd-logind[1455]: New session 16 of user core.
May 9 01:18:19.050342 systemd[1]: Started session-16.scope - Session 16 of User core.
May 9 01:18:19.853220 sshd[4448]: Connection closed by 172.24.4.1 port 42676
May 9 01:18:19.854780 sshd-session[4446]: pam_unix(sshd:session): session closed for user core
May 9 01:18:19.867126 systemd[1]: sshd@13-172.24.4.244:22-172.24.4.1:42676.service: Deactivated successfully.
May 9 01:18:19.876149 systemd[1]: session-16.scope: Deactivated successfully.
May 9 01:18:19.878512 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit.
May 9 01:18:19.881326 systemd-logind[1455]: Removed session 16.
May 9 01:18:21.090121 kubelet[2813]: E0509 01:18:21.089718 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:18:24.887853 systemd[1]: Started sshd@14-172.24.4.244:22-172.24.4.1:48468.service - OpenSSH per-connection server daemon (172.24.4.1:48468).
May 9 01:18:26.091269 kubelet[2813]: E0509 01:18:26.091041 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:18:26.278474 sshd[4460]: Accepted publickey for core from 172.24.4.1 port 48468 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:18:26.280478 sshd-session[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:18:26.293694 systemd-logind[1455]: New session 17 of user core.
May 9 01:18:26.297215 systemd[1]: Started session-17.scope - Session 17 of User core.
May 9 01:18:26.889014 sshd[4462]: Connection closed by 172.24.4.1 port 48468
May 9 01:18:26.890211 sshd-session[4460]: pam_unix(sshd:session): session closed for user core
May 9 01:18:26.900094 systemd[1]: sshd@14-172.24.4.244:22-172.24.4.1:48468.service: Deactivated successfully.
May 9 01:18:26.903353 systemd[1]: session-17.scope: Deactivated successfully.
May 9 01:18:26.906524 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit.
May 9 01:18:26.909109 systemd-logind[1455]: Removed session 17.
May 9 01:18:31.093642 kubelet[2813]: E0509 01:18:31.092887 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:18:31.931618 systemd[1]: Started sshd@15-172.24.4.244:22-172.24.4.1:48472.service - OpenSSH per-connection server daemon (172.24.4.1:48472).
May 9 01:18:33.053114 sshd[4475]: Accepted publickey for core from 172.24.4.1 port 48472 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:18:33.057390 sshd-session[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:18:33.080586 systemd-logind[1455]: New session 18 of user core.
May 9 01:18:33.092373 systemd[1]: Started session-18.scope - Session 18 of User core.
May 9 01:18:33.903584 sshd[4477]: Connection closed by 172.24.4.1 port 48472
May 9 01:18:33.905163 sshd-session[4475]: pam_unix(sshd:session): session closed for user core
May 9 01:18:33.917105 systemd[1]: sshd@15-172.24.4.244:22-172.24.4.1:48472.service: Deactivated successfully.
May 9 01:18:33.934190 systemd[1]: session-18.scope: Deactivated successfully.
May 9 01:18:33.944625 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit.
May 9 01:18:33.946736 systemd-logind[1455]: Removed session 18.
May 9 01:18:36.094101 kubelet[2813]: E0509 01:18:36.094047 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:18:38.842793 containerd[1478]: time="2025-05-09T01:18:38.842569144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"0f9c5d041b9df3a57207f868a707381b2fefb3f728f3a30068dad72ee6e5eca0\" pid:4503 exited_at:{seconds:1746753518 nanos:840869941}"
May 9 01:18:38.925559 systemd[1]: Started sshd@16-172.24.4.244:22-172.24.4.1:40094.service - OpenSSH per-connection server daemon (172.24.4.1:40094).
May 9 01:18:40.110028 sshd[4517]: Accepted publickey for core from 172.24.4.1 port 40094 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:18:40.114478 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:18:40.131468 systemd-logind[1455]: New session 19 of user core.
May 9 01:18:40.140323 systemd[1]: Started session-19.scope - Session 19 of User core.
May 9 01:18:40.901049 sshd[4519]: Connection closed by 172.24.4.1 port 40094
May 9 01:18:40.900427 sshd-session[4517]: pam_unix(sshd:session): session closed for user core
May 9 01:18:40.911515 systemd[1]: sshd@16-172.24.4.244:22-172.24.4.1:40094.service: Deactivated successfully.
May 9 01:18:40.917106 systemd[1]: session-19.scope: Deactivated successfully.
May 9 01:18:40.923890 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit.
May 9 01:18:40.926880 systemd-logind[1455]: Removed session 19.
May 9 01:18:41.094375 kubelet[2813]: E0509 01:18:41.094235 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:18:45.930219 systemd[1]: Started sshd@17-172.24.4.244:22-172.24.4.1:34574.service - OpenSSH per-connection server daemon (172.24.4.1:34574).
May 9 01:18:46.094764 kubelet[2813]: E0509 01:18:46.094672 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:18:47.282403 sshd[4534]: Accepted publickey for core from 172.24.4.1 port 34574 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:18:47.285060 sshd-session[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:18:47.295342 systemd-logind[1455]: New session 20 of user core.
May 9 01:18:47.302329 systemd[1]: Started session-20.scope - Session 20 of User core.
May 9 01:18:48.169658 sshd[4536]: Connection closed by 172.24.4.1 port 34574
May 9 01:18:48.176180 sshd-session[4534]: pam_unix(sshd:session): session closed for user core
May 9 01:18:48.191844 systemd[1]: sshd@17-172.24.4.244:22-172.24.4.1:34574.service: Deactivated successfully.
May 9 01:18:48.206884 systemd[1]: session-20.scope: Deactivated successfully.
May 9 01:18:48.211175 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit.
May 9 01:18:48.214986 systemd-logind[1455]: Removed session 20.
May 9 01:18:48.764623 kubelet[2813]: E0509 01:18:48.763568 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:18:48.769132 kubelet[2813]: E0509 01:18:48.765845 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:18:48.769132 kubelet[2813]: E0509 01:18:48.766328 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-7db6d8ff4d-9p94z"
May 9 01:18:48.769132 kubelet[2813]: E0509 01:18:48.766581 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-7db6d8ff4d-9p94z"
May 9 01:18:48.769132 kubelet[2813]: E0509 01:18:48.766763 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/calico-kube-controllers-96d7d7cd5-lf972"
May 9 01:18:48.769132 kubelet[2813]: E0509 01:18:48.766902 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/calico-kube-controllers-96d7d7cd5-lf972"
May 9 01:18:48.769132 kubelet[2813]: E0509 01:18:48.767294 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9p94z_kube-system(f9d20423-e293-4efa-8bcc-bf0ab9f333b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9p94z_kube-system(f9d20423-e293-4efa-8bcc-bf0ab9f333b5)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-7db6d8ff4d-9p94z" podUID="f9d20423-e293-4efa-8bcc-bf0ab9f333b5"
May 9 01:18:48.770471 kubelet[2813]: E0509 01:18:48.767339 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-96d7d7cd5-lf972_calico-system(23a466ce-0281-447f-9cb4-09d9798f1f80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-96d7d7cd5-lf972_calico-system(23a466ce-0281-447f-9cb4-09d9798f1f80)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-system/calico-kube-controllers-96d7d7cd5-lf972" podUID="23a466ce-0281-447f-9cb4-09d9798f1f80"
May 9 01:18:49.076036 containerd[1478]: time="2025-05-09T01:18:49.074129624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96d7d7cd5-lf972,Uid:23a466ce-0281-447f-9cb4-09d9798f1f80,Namespace:calico-system,Attempt:0,}"
May 9 01:18:49.076036 containerd[1478]: time="2025-05-09T01:18:49.075653787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9p94z,Uid:f9d20423-e293-4efa-8bcc-bf0ab9f333b5,Namespace:kube-system,Attempt:0,}"
May 9 01:18:49.076825 containerd[1478]: time="2025-05-09T01:18:49.076278551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9p94z,Uid:f9d20423-e293-4efa-8bcc-bf0ab9f333b5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-7db6d8ff4d-9p94z_kube-system_f9d20423-e293-4efa-8bcc-bf0ab9f333b5_0\": name \"coredns-7db6d8ff4d-9p94z_kube-system_f9d20423-e293-4efa-8bcc-bf0ab9f333b5_0\" is reserved for \"6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444\""
May 9 01:18:49.081012 kubelet[2813]: E0509 01:18:49.079296 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-7db6d8ff4d-9p94z_kube-system_f9d20423-e293-4efa-8bcc-bf0ab9f333b5_0\": name \"coredns-7db6d8ff4d-9p94z_kube-system_f9d20423-e293-4efa-8bcc-bf0ab9f333b5_0\" is reserved for \"6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444\""
May 9 01:18:49.081012 kubelet[2813]: E0509 01:18:49.079361 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-7db6d8ff4d-9p94z_kube-system_f9d20423-e293-4efa-8bcc-bf0ab9f333b5_0\": name \"coredns-7db6d8ff4d-9p94z_kube-system_f9d20423-e293-4efa-8bcc-bf0ab9f333b5_0\" is reserved for \"6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444\"" pod="kube-system/coredns-7db6d8ff4d-9p94z"
May 9 01:18:49.081012 kubelet[2813]: E0509 01:18:49.079394 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-7db6d8ff4d-9p94z_kube-system_f9d20423-e293-4efa-8bcc-bf0ab9f333b5_0\": name \"coredns-7db6d8ff4d-9p94z_kube-system_f9d20423-e293-4efa-8bcc-bf0ab9f333b5_0\" is reserved for \"6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444\"" pod="kube-system/coredns-7db6d8ff4d-9p94z"
May 9 01:18:49.081622 containerd[1478]: time="2025-05-09T01:18:49.079036073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-96d7d7cd5-lf972,Uid:23a466ce-0281-447f-9cb4-09d9798f1f80,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-kube-controllers-96d7d7cd5-lf972_calico-system_23a466ce-0281-447f-9cb4-09d9798f1f80_0\": name \"calico-kube-controllers-96d7d7cd5-lf972_calico-system_23a466ce-0281-447f-9cb4-09d9798f1f80_0\" is reserved for \"0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8\""
May 9 01:18:49.081714 kubelet[2813]: E0509 01:18:49.079452 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9p94z_kube-system(f9d20423-e293-4efa-8bcc-bf0ab9f333b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9p94z_kube-system(f9d20423-e293-4efa-8bcc-bf0ab9f333b5)\\\": rpc error: code = Unknown desc = failed to reserve sandbox name \\\"coredns-7db6d8ff4d-9p94z_kube-system_f9d20423-e293-4efa-8bcc-bf0ab9f333b5_0\\\": name \\\"coredns-7db6d8ff4d-9p94z_kube-system_f9d20423-e293-4efa-8bcc-bf0ab9f333b5_0\\\" is reserved for \\\"6a35a15a55b98dc8b8270b8f0904088f00ff0032bdaa723dd40caae7ef796444\\\"\"" pod="kube-system/coredns-7db6d8ff4d-9p94z" podUID="f9d20423-e293-4efa-8bcc-bf0ab9f333b5"
May 9 01:18:49.081714 kubelet[2813]: E0509 01:18:49.080110 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-kube-controllers-96d7d7cd5-lf972_calico-system_23a466ce-0281-447f-9cb4-09d9798f1f80_0\": name \"calico-kube-controllers-96d7d7cd5-lf972_calico-system_23a466ce-0281-447f-9cb4-09d9798f1f80_0\" is reserved for \"0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8\""
May 9 01:18:49.081714 kubelet[2813]: E0509 01:18:49.080142 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-kube-controllers-96d7d7cd5-lf972_calico-system_23a466ce-0281-447f-9cb4-09d9798f1f80_0\": name \"calico-kube-controllers-96d7d7cd5-lf972_calico-system_23a466ce-0281-447f-9cb4-09d9798f1f80_0\" is reserved for \"0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8\"" pod="calico-system/calico-kube-controllers-96d7d7cd5-lf972"
May 9 01:18:49.081920 kubelet[2813]: E0509 01:18:49.080230 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-kube-controllers-96d7d7cd5-lf972_calico-system_23a466ce-0281-447f-9cb4-09d9798f1f80_0\": name \"calico-kube-controllers-96d7d7cd5-lf972_calico-system_23a466ce-0281-447f-9cb4-09d9798f1f80_0\" is reserved for \"0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8\"" pod="calico-system/calico-kube-controllers-96d7d7cd5-lf972"
May 9 01:18:49.081920 kubelet[2813]: E0509 01:18:49.080319 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-96d7d7cd5-lf972_calico-system(23a466ce-0281-447f-9cb4-09d9798f1f80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-96d7d7cd5-lf972_calico-system(23a466ce-0281-447f-9cb4-09d9798f1f80)\\\": rpc error: code = Unknown desc = failed to reserve sandbox name \\\"calico-kube-controllers-96d7d7cd5-lf972_calico-system_23a466ce-0281-447f-9cb4-09d9798f1f80_0\\\": name \\\"calico-kube-controllers-96d7d7cd5-lf972_calico-system_23a466ce-0281-447f-9cb4-09d9798f1f80_0\\\" is reserved for \\\"0da6bde8d715b5adb271f94a55d08d6ea36c3bd3d5e6fa02de0622901f2332c8\\\"\"" pod="calico-system/calico-kube-controllers-96d7d7cd5-lf972" podUID="23a466ce-0281-447f-9cb4-09d9798f1f80"
May 9 01:18:49.755177 kubelet[2813]: E0509 01:18:49.754539 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:18:49.755177 kubelet[2813]: E0509 01:18:49.755105 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-98b7b8ffb-59pkm"
May 9 01:18:49.755683 kubelet[2813]: E0509 01:18:49.755246 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-98b7b8ffb-59pkm"
May 9 01:18:49.757183 kubelet[2813]: E0509 01:18:49.756175 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-98b7b8ffb-59pkm_calico-apiserver(1ac9be78-6b39-45f7-8e12-a5b27024ca27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-98b7b8ffb-59pkm_calico-apiserver(1ac9be78-6b39-45f7-8e12-a5b27024ca27)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-apiserver/calico-apiserver-98b7b8ffb-59pkm" podUID="1ac9be78-6b39-45f7-8e12-a5b27024ca27"
May 9 01:18:50.754969 kubelet[2813]: E0509 01:18:50.754901 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:18:50.754969 kubelet[2813]: E0509 01:18:50.754985 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/csi-node-driver-zgbc8"
May 9 01:18:50.755581 kubelet[2813]: E0509 01:18:50.755008 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/csi-node-driver-zgbc8"
May 9 01:18:50.755581 kubelet[2813]: E0509 01:18:50.755061 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zgbc8_calico-system(4d010afc-8605-44c1-9991-fd6272876d69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zgbc8_calico-system(4d010afc-8605-44c1-9991-fd6272876d69)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-system/csi-node-driver-zgbc8" podUID="4d010afc-8605-44c1-9991-fd6272876d69"
May 9 01:18:50.756800 kubelet[2813]: E0509 01:18:50.756505 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:18:50.756800 kubelet[2813]: E0509 01:18:50.756538 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-7db6d8ff4d-jbv4v"
May 9 01:18:50.756800 kubelet[2813]: E0509 01:18:50.756554 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-7db6d8ff4d-jbv4v"
May 9 01:18:50.756800 kubelet[2813]: E0509 01:18:50.756577 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jbv4v_kube-system(f44564ce-4f13-4505-b7e9-09ffdcbb2bc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jbv4v_kube-system(f44564ce-4f13-4505-b7e9-09ffdcbb2bc0)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-7db6d8ff4d-jbv4v" podUID="f44564ce-4f13-4505-b7e9-09ffdcbb2bc0"
May 9 01:18:51.095905 kubelet[2813]: E0509 01:18:51.095689 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:18:51.756108 kubelet[2813]: E0509 01:18:51.755944 2813 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:18:51.756108 kubelet[2813]: E0509 01:18:51.756102 2813 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-98b7b8ffb-8f46k"
May 9 01:18:51.756108 kubelet[2813]: E0509 01:18:51.756140 2813 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-98b7b8ffb-8f46k"
May 9 01:18:51.759120 kubelet[2813]: E0509 01:18:51.756223 2813 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-98b7b8ffb-8f46k_calico-apiserver(ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-98b7b8ffb-8f46k_calico-apiserver(ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-apiserver/calico-apiserver-98b7b8ffb-8f46k" podUID="ffa3bd38-ea0e-49b2-933b-5ba9b1ba7638"
May 9 01:18:52.747717 containerd[1478]: time="2025-05-09T01:18:52.747114084Z" level=warning msg="container event discarded" container=e9abf94577906a6811ccc2d50c318e9d57728aa90bddabc826b06a1dba66a0f6 type=CONTAINER_CREATED_EVENT
May 9 01:18:52.760208 containerd[1478]: time="2025-05-09T01:18:52.760058473Z" level=warning msg="container event discarded" container=e9abf94577906a6811ccc2d50c318e9d57728aa90bddabc826b06a1dba66a0f6 type=CONTAINER_STARTED_EVENT
May 9 01:18:52.760208 containerd[1478]: time="2025-05-09T01:18:52.760162509Z" level=warning msg="container event discarded" container=51c913f5f0583b4f10c48c188e7699c65f048299d2e85b9a1974b1e6f9c14e73 type=CONTAINER_CREATED_EVENT
May 9 01:18:52.760208 containerd[1478]: time="2025-05-09T01:18:52.760186414Z" level=warning msg="container event discarded" container=51c913f5f0583b4f10c48c188e7699c65f048299d2e85b9a1974b1e6f9c14e73 type=CONTAINER_STARTED_EVENT
May 9 01:18:52.784512 containerd[1478]: time="2025-05-09T01:18:52.784370932Z" level=warning msg="container event discarded" container=7ebf76892767ef67a00f34bc3745db9046e547d451416f2ba35cc18141506428 type=CONTAINER_CREATED_EVENT
May 9 01:18:52.784512 containerd[1478]: time="2025-05-09T01:18:52.784457484Z" level=warning msg="container event discarded" container=7ebf76892767ef67a00f34bc3745db9046e547d451416f2ba35cc18141506428 type=CONTAINER_STARTED_EVENT
May 9 01:18:52.805032 containerd[1478]: time="2025-05-09T01:18:52.804830040Z" level=warning msg="container event discarded" container=fb922d3b219512ce034fdf1edf71ffd7fb638432f6a9dee9fdede9f290b5599e type=CONTAINER_CREATED_EVENT
May 9 01:18:52.827966 containerd[1478]: time="2025-05-09T01:18:52.827791772Z" level=warning msg="container event discarded" container=af7404a3466f142b0ae1e0aefe9a601b8692b7f8a0d0412c02bbfdff25f80f26 type=CONTAINER_CREATED_EVENT
May 9 01:18:52.841286 containerd[1478]: time="2025-05-09T01:18:52.841147594Z" level=warning msg="container event discarded" container=dd51ebaa7ac78fdd1d183985c4ec656e367275ad852ce96f3e89677d394c5556 type=CONTAINER_CREATED_EVENT
May 9 01:18:52.923921 containerd[1478]: time="2025-05-09T01:18:52.923732184Z" level=warning msg="container event discarded" container=fb922d3b219512ce034fdf1edf71ffd7fb638432f6a9dee9fdede9f290b5599e type=CONTAINER_STARTED_EVENT
May 9 01:18:52.969567 containerd[1478]: time="2025-05-09T01:18:52.969448596Z" level=warning msg="container event discarded" container=dd51ebaa7ac78fdd1d183985c4ec656e367275ad852ce96f3e89677d394c5556 type=CONTAINER_STARTED_EVENT
May 9 01:18:52.993091 containerd[1478]: time="2025-05-09T01:18:52.992892081Z" level=warning msg="container event discarded" container=af7404a3466f142b0ae1e0aefe9a601b8692b7f8a0d0412c02bbfdff25f80f26 type=CONTAINER_STARTED_EVENT
May 9 01:18:53.221302 systemd[1]: Started sshd@18-172.24.4.244:22-172.24.4.1:34582.service - OpenSSH per-connection server daemon (172.24.4.1:34582).
May 9 01:18:53.854168 kubelet[2813]: E0509 01:18:53.853896 2813 remote_runtime.go:633] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:18:53.855917 kubelet[2813]: E0509 01:18:53.854203 2813 kubelet.go:2885] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:18:54.299842 sshd[4549]: Accepted publickey for core from 172.24.4.1 port 34582 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:18:54.302591 sshd-session[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:18:54.314783 systemd-logind[1455]: New session 21 of user core.
May 9 01:18:54.322247 systemd[1]: Started session-21.scope - Session 21 of User core.
May 9 01:18:55.181785 sshd[4551]: Connection closed by 172.24.4.1 port 34582
May 9 01:18:55.184358 sshd-session[4549]: pam_unix(sshd:session): session closed for user core
May 9 01:18:55.191360 systemd[1]: sshd@18-172.24.4.244:22-172.24.4.1:34582.service: Deactivated successfully.
May 9 01:18:55.195659 systemd[1]: session-21.scope: Deactivated successfully.
May 9 01:18:55.198336 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit.
May 9 01:18:55.199811 systemd-logind[1455]: Removed session 21.
May 9 01:18:56.096549 kubelet[2813]: E0509 01:18:56.096445 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:19:00.223784 systemd[1]: Started sshd@19-172.24.4.244:22-172.24.4.1:53498.service - OpenSSH per-connection server daemon (172.24.4.1:53498).
May 9 01:19:01.096798 kubelet[2813]: E0509 01:19:01.096609 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:19:01.379268 sshd[4566]: Accepted publickey for core from 172.24.4.1 port 53498 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:19:01.383242 sshd-session[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:19:01.397550 systemd-logind[1455]: New session 22 of user core.
May 9 01:19:01.409582 systemd[1]: Started session-22.scope - Session 22 of User core.
May 9 01:19:02.128815 sshd[4568]: Connection closed by 172.24.4.1 port 53498
May 9 01:19:02.136791 sshd-session[4566]: pam_unix(sshd:session): session closed for user core
May 9 01:19:02.157093 systemd[1]: sshd@19-172.24.4.244:22-172.24.4.1:53498.service: Deactivated successfully.
May 9 01:19:02.168017 systemd[1]: session-22.scope: Deactivated successfully.
May 9 01:19:02.174640 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit.
May 9 01:19:02.178953 systemd-logind[1455]: Removed session 22.
May 9 01:19:06.098589 kubelet[2813]: E0509 01:19:06.098221 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:19:07.169345 systemd[1]: Started sshd@20-172.24.4.244:22-172.24.4.1:53818.service - OpenSSH per-connection server daemon (172.24.4.1:53818).
May 9 01:19:08.283084 sshd[4580]: Accepted publickey for core from 172.24.4.1 port 53818 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:19:08.286647 sshd-session[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:19:08.308623 systemd-logind[1455]: New session 23 of user core.
May 9 01:19:08.325534 systemd[1]: Started session-23.scope - Session 23 of User core.
May 9 01:19:08.896168 containerd[1478]: time="2025-05-09T01:19:08.896035050Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"9c8d61c3366fc1344ec42e06ed5db9b4fa3728c94dc8248eea71515bd27d01cf\" pid:4595 exited_at:{seconds:1746753548 nanos:895154326}"
May 9 01:19:09.031779 sshd[4582]: Connection closed by 172.24.4.1 port 53818
May 9 01:19:09.031136 sshd-session[4580]: pam_unix(sshd:session): session closed for user core
May 9 01:19:09.038455 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit.
May 9 01:19:09.038677 systemd[1]: sshd@20-172.24.4.244:22-172.24.4.1:53818.service: Deactivated successfully.
May 9 01:19:09.041248 systemd[1]: session-23.scope: Deactivated successfully.
May 9 01:19:09.045107 systemd-logind[1455]: Removed session 23.
May 9 01:19:11.099235 kubelet[2813]: E0509 01:19:11.099109 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:19:14.072575 systemd[1]: Started sshd@21-172.24.4.244:22-172.24.4.1:40188.service - OpenSSH per-connection server daemon (172.24.4.1:40188).
May 9 01:19:15.178665 sshd[4619]: Accepted publickey for core from 172.24.4.1 port 40188 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:19:15.180484 sshd-session[4619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:19:15.190098 systemd-logind[1455]: New session 24 of user core.
May 9 01:19:15.194361 systemd[1]: Started session-24.scope - Session 24 of User core.
May 9 01:19:15.673401 containerd[1478]: time="2025-05-09T01:19:15.673190066Z" level=warning msg="container event discarded" container=946b4a710a12eeb781b474d3508a514492f772a40dd6392def17f94b55e6d8e0 type=CONTAINER_CREATED_EVENT
May 9 01:19:15.673401 containerd[1478]: time="2025-05-09T01:19:15.673355136Z" level=warning msg="container event discarded" container=946b4a710a12eeb781b474d3508a514492f772a40dd6392def17f94b55e6d8e0 type=CONTAINER_STARTED_EVENT
May 9 01:19:15.702814 containerd[1478]: time="2025-05-09T01:19:15.702714417Z" level=warning msg="container event discarded" container=76bcc5731293ed0dde940892a27a77b650ea863bf11c00d5b940adc5dfa9d1c8 type=CONTAINER_CREATED_EVENT
May 9 01:19:15.702814 containerd[1478]: time="2025-05-09T01:19:15.702805509Z" level=warning msg="container event discarded" container=76bcc5731293ed0dde940892a27a77b650ea863bf11c00d5b940adc5dfa9d1c8 type=CONTAINER_STARTED_EVENT
May 9 01:19:15.737498 containerd[1478]: time="2025-05-09T01:19:15.737332769Z" level=warning msg="container event discarded" container=fc87b22311ae4536c969f3f11906310f0244a4ed7231db467bf3f58ba66d9636 type=CONTAINER_CREATED_EVENT
May 9 01:19:15.814180 containerd[1478]: time="2025-05-09T01:19:15.813856178Z" level=warning msg="container event discarded" container=fc87b22311ae4536c969f3f11906310f0244a4ed7231db467bf3f58ba66d9636 type=CONTAINER_STARTED_EVENT
May 9 01:19:15.839116 sshd[4621]: Connection closed by 172.24.4.1 port 40188
May 9 01:19:15.841516 sshd-session[4619]: pam_unix(sshd:session): session closed for user core
May 9 01:19:15.851578 systemd[1]: sshd@21-172.24.4.244:22-172.24.4.1:40188.service: Deactivated successfully.
May 9 01:19:15.858537 systemd[1]: session-24.scope: Deactivated successfully.
May 9 01:19:15.862157 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit.
May 9 01:19:15.865344 systemd-logind[1455]: Removed session 24.
May 9 01:19:16.099676 kubelet[2813]: E0509 01:19:16.099396 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:19:18.201561 containerd[1478]: time="2025-05-09T01:19:18.201454158Z" level=warning msg="container event discarded" container=24563d185a308463b04e82d6a4fdb2c55205d440f98cdaeafea67e4c10b40802 type=CONTAINER_CREATED_EVENT
May 9 01:19:18.265860 containerd[1478]: time="2025-05-09T01:19:18.265772829Z" level=warning msg="container event discarded" container=24563d185a308463b04e82d6a4fdb2c55205d440f98cdaeafea67e4c10b40802 type=CONTAINER_STARTED_EVENT
May 9 01:19:20.858413 systemd[1]: Started sshd@22-172.24.4.244:22-172.24.4.1:40194.service - OpenSSH per-connection server daemon (172.24.4.1:40194).
May 9 01:19:21.101039 kubelet[2813]: E0509 01:19:21.100346 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:19:22.004676 containerd[1478]: time="2025-05-09T01:19:22.004446958Z" level=warning msg="container event discarded" container=f1025b0b7fdc94845aa778ef16fb151244feb1c1645615bbbed471100f2c530c type=CONTAINER_CREATED_EVENT
May 9 01:19:22.004676 containerd[1478]: time="2025-05-09T01:19:22.004616817Z" level=warning msg="container event discarded" container=f1025b0b7fdc94845aa778ef16fb151244feb1c1645615bbbed471100f2c530c type=CONTAINER_STARTED_EVENT
May 9 01:19:22.062649 containerd[1478]: time="2025-05-09T01:19:22.062379931Z" level=warning msg="container event discarded" container=bb4e958ac2f8966ab22d3bd25922904836224c2f8b4b4c8ec505a1a077495523 type=CONTAINER_CREATED_EVENT
May 9 01:19:22.062649 containerd[1478]: time="2025-05-09T01:19:22.062623419Z" level=warning msg="container event discarded" container=bb4e958ac2f8966ab22d3bd25922904836224c2f8b4b4c8ec505a1a077495523 type=CONTAINER_STARTED_EVENT
May 9 01:19:22.128920 sshd[4636]: Accepted publickey for core from 172.24.4.1 port 40194 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:19:22.132878 sshd-session[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:19:22.147087 systemd-logind[1455]: New session 25 of user core.
May 9 01:19:22.155396 systemd[1]: Started session-25.scope - Session 25 of User core.
May 9 01:19:22.926589 sshd[4638]: Connection closed by 172.24.4.1 port 40194
May 9 01:19:22.928273 sshd-session[4636]: pam_unix(sshd:session): session closed for user core
May 9 01:19:22.938257 systemd[1]: sshd@22-172.24.4.244:22-172.24.4.1:40194.service: Deactivated successfully.
May 9 01:19:22.944356 systemd[1]: session-25.scope: Deactivated successfully.
May 9 01:19:22.946680 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit.
May 9 01:19:22.949824 systemd-logind[1455]: Removed session 25.
May 9 01:19:25.517177 containerd[1478]: time="2025-05-09T01:19:25.517022043Z" level=warning msg="container event discarded" container=2553424bf3be122cc1b296f346c46d5157b214559b5cc20be9bf79b30171dc43 type=CONTAINER_CREATED_EVENT
May 9 01:19:25.620787 containerd[1478]: time="2025-05-09T01:19:25.620668293Z" level=warning msg="container event discarded" container=2553424bf3be122cc1b296f346c46d5157b214559b5cc20be9bf79b30171dc43 type=CONTAINER_STARTED_EVENT
May 9 01:19:26.100808 kubelet[2813]: E0509 01:19:26.100735 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:19:27.574911 containerd[1478]: time="2025-05-09T01:19:27.574619102Z" level=warning msg="container event discarded" container=7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55 type=CONTAINER_CREATED_EVENT
May 9 01:19:27.668832 containerd[1478]: time="2025-05-09T01:19:27.668642323Z" level=warning msg="container event discarded" container=7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55 type=CONTAINER_STARTED_EVENT
May 9 01:19:27.950588 systemd[1]: Started sshd@23-172.24.4.244:22-172.24.4.1:55004.service - OpenSSH per-connection server daemon (172.24.4.1:55004).
May 9 01:19:28.462652 containerd[1478]: time="2025-05-09T01:19:28.462472337Z" level=warning msg="container event discarded" container=7d73a7b8fa4c9f7e2d47c8da9b2fd99dedb5e1c92c21d4efab5f320f61724e55 type=CONTAINER_STOPPED_EVENT
May 9 01:19:29.305062 sshd[4651]: Accepted publickey for core from 172.24.4.1 port 55004 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:19:29.308743 sshd-session[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:19:29.324589 systemd-logind[1455]: New session 26 of user core.
May 9 01:19:29.335513 systemd[1]: Started session-26.scope - Session 26 of User core.
May 9 01:19:30.153791 sshd[4653]: Connection closed by 172.24.4.1 port 55004
May 9 01:19:30.155594 sshd-session[4651]: pam_unix(sshd:session): session closed for user core
May 9 01:19:30.170420 systemd[1]: sshd@23-172.24.4.244:22-172.24.4.1:55004.service: Deactivated successfully.
May 9 01:19:30.181183 systemd[1]: session-26.scope: Deactivated successfully.
May 9 01:19:30.184302 systemd-logind[1455]: Session 26 logged out. Waiting for processes to exit.
May 9 01:19:30.187446 systemd-logind[1455]: Removed session 26.
May 9 01:19:31.101641 kubelet[2813]: E0509 01:19:31.101543 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:19:35.052248 containerd[1478]: time="2025-05-09T01:19:35.051476719Z" level=warning msg="container event discarded" container=3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625 type=CONTAINER_CREATED_EVENT
May 9 01:19:35.130222 containerd[1478]: time="2025-05-09T01:19:35.130100710Z" level=warning msg="container event discarded" container=3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625 type=CONTAINER_STARTED_EVENT
May 9 01:19:35.186198 systemd[1]: Started sshd@24-172.24.4.244:22-172.24.4.1:56592.service - OpenSSH per-connection server daemon (172.24.4.1:56592).
May 9 01:19:36.104900 kubelet[2813]: E0509 01:19:36.104519 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:19:36.207495 sshd[4678]: Accepted publickey for core from 172.24.4.1 port 56592 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:19:36.212211 sshd-session[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:19:36.229670 systemd-logind[1455]: New session 27 of user core.
May 9 01:19:36.243558 systemd[1]: Started session-27.scope - Session 27 of User core.
May 9 01:19:36.994320 sshd[4680]: Connection closed by 172.24.4.1 port 56592
May 9 01:19:36.997630 sshd-session[4678]: pam_unix(sshd:session): session closed for user core
May 9 01:19:37.004138 systemd[1]: sshd@24-172.24.4.244:22-172.24.4.1:56592.service: Deactivated successfully.
May 9 01:19:37.006627 systemd[1]: session-27.scope: Deactivated successfully.
May 9 01:19:37.009919 systemd-logind[1455]: Session 27 logged out. Waiting for processes to exit.
May 9 01:19:37.012781 systemd-logind[1455]: Removed session 27.
May 9 01:19:37.209509 containerd[1478]: time="2025-05-09T01:19:37.209130346Z" level=warning msg="container event discarded" container=3a64acfd50bec0cf6166374c45edea9d8c1c1e9163afd9493a5f40187c675625 type=CONTAINER_STOPPED_EVENT
May 9 01:19:38.821084 containerd[1478]: time="2025-05-09T01:19:38.820951786Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"105fff9bfb05112b30431cb68cb6e10e829a5e8463f0dc1bfa83a0016ef09d0f\" pid:4704 exited_at:{seconds:1746753578 nanos:819220435}"
May 9 01:19:41.107868 kubelet[2813]: E0509 01:19:41.107763 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:19:42.028741 systemd[1]: Started sshd@25-172.24.4.244:22-172.24.4.1:56608.service - OpenSSH per-connection server daemon (172.24.4.1:56608).
May 9 01:19:43.147084 sshd[4717]: Accepted publickey for core from 172.24.4.1 port 56608 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:19:43.151291 sshd-session[4717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:19:43.168820 systemd-logind[1455]: New session 28 of user core.
May 9 01:19:43.175412 systemd[1]: Started session-28.scope - Session 28 of User core.
May 9 01:19:43.895443 sshd[4719]: Connection closed by 172.24.4.1 port 56608
May 9 01:19:43.896158 sshd-session[4717]: pam_unix(sshd:session): session closed for user core
May 9 01:19:43.907634 systemd[1]: sshd@25-172.24.4.244:22-172.24.4.1:56608.service: Deactivated successfully.
May 9 01:19:43.916348 systemd[1]: session-28.scope: Deactivated successfully.
May 9 01:19:43.920789 systemd-logind[1455]: Session 28 logged out. Waiting for processes to exit.
May 9 01:19:43.924589 systemd-logind[1455]: Removed session 28.
May 9 01:19:46.108604 kubelet[2813]: E0509 01:19:46.108531 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:19:46.953932 containerd[1478]: time="2025-05-09T01:19:46.953789425Z" level=warning msg="container event discarded" container=3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73 type=CONTAINER_CREATED_EVENT
May 9 01:19:47.070243 containerd[1478]: time="2025-05-09T01:19:47.070162670Z" level=warning msg="container event discarded" container=3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73 type=CONTAINER_STARTED_EVENT
May 9 01:19:48.911249 systemd[1]: Started sshd@26-172.24.4.244:22-172.24.4.1:33138.service - OpenSSH per-connection server daemon (172.24.4.1:33138).
May 9 01:19:50.086485 sshd[4736]: Accepted publickey for core from 172.24.4.1 port 33138 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:19:50.089852 sshd-session[4736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:19:50.102576 systemd-logind[1455]: New session 29 of user core.
May 9 01:19:50.113302 systemd[1]: Started session-29.scope - Session 29 of User core.
May 9 01:19:50.974196 sshd[4738]: Connection closed by 172.24.4.1 port 33138
May 9 01:19:50.974946 sshd-session[4736]: pam_unix(sshd:session): session closed for user core
May 9 01:19:50.980347 systemd[1]: sshd@26-172.24.4.244:22-172.24.4.1:33138.service: Deactivated successfully.
May 9 01:19:50.983930 systemd[1]: session-29.scope: Deactivated successfully.
May 9 01:19:50.985140 systemd-logind[1455]: Session 29 logged out. Waiting for processes to exit.
May 9 01:19:50.986195 systemd-logind[1455]: Removed session 29.
May 9 01:19:51.109466 kubelet[2813]: E0509 01:19:51.109401 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:19:56.010904 systemd[1]: Started sshd@27-172.24.4.244:22-172.24.4.1:52518.service - OpenSSH per-connection server daemon (172.24.4.1:52518).
May 9 01:19:56.111734 kubelet[2813]: E0509 01:19:56.111643 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:19:57.177280 sshd[4750]: Accepted publickey for core from 172.24.4.1 port 52518 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:19:57.179211 sshd-session[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:19:57.186279 systemd-logind[1455]: New session 30 of user core.
May 9 01:19:57.193174 systemd[1]: Started session-30.scope - Session 30 of User core.
May 9 01:19:58.007108 sshd[4752]: Connection closed by 172.24.4.1 port 52518
May 9 01:19:58.008635 sshd-session[4750]: pam_unix(sshd:session): session closed for user core
May 9 01:19:58.018278 systemd[1]: sshd@27-172.24.4.244:22-172.24.4.1:52518.service: Deactivated successfully.
May 9 01:19:58.024586 systemd[1]: session-30.scope: Deactivated successfully.
May 9 01:19:58.027745 systemd-logind[1455]: Session 30 logged out. Waiting for processes to exit.
May 9 01:19:58.030780 systemd-logind[1455]: Removed session 30.
May 9 01:20:01.113423 kubelet[2813]: E0509 01:20:01.112674 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:20:03.041873 systemd[1]: Started sshd@28-172.24.4.244:22-172.24.4.1:52534.service - OpenSSH per-connection server daemon (172.24.4.1:52534).
May 9 01:20:04.159675 sshd[4767]: Accepted publickey for core from 172.24.4.1 port 52534 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:20:04.164639 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:20:04.185623 systemd-logind[1455]: New session 31 of user core.
May 9 01:20:04.193386 systemd[1]: Started session-31.scope - Session 31 of User core.
May 9 01:20:04.907489 sshd[4769]: Connection closed by 172.24.4.1 port 52534
May 9 01:20:04.909321 sshd-session[4767]: pam_unix(sshd:session): session closed for user core
May 9 01:20:04.919336 systemd[1]: sshd@28-172.24.4.244:22-172.24.4.1:52534.service: Deactivated successfully.
May 9 01:20:04.926814 systemd[1]: session-31.scope: Deactivated successfully.
May 9 01:20:04.932621 systemd-logind[1455]: Session 31 logged out. Waiting for processes to exit.
May 9 01:20:04.935350 systemd-logind[1455]: Removed session 31.
May 9 01:20:06.114589 kubelet[2813]: E0509 01:20:06.114436 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:20:08.830312 containerd[1478]: time="2025-05-09T01:20:08.829064603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"a0daa7e0294b858f4285aa102c54dd03c787c986ca9e3f8e5f2169b507719f0b\" pid:4795 exited_at:{seconds:1746753608 nanos:827960629}"
May 9 01:20:09.933850 systemd[1]: Started sshd@29-172.24.4.244:22-172.24.4.1:54508.service - OpenSSH per-connection server daemon (172.24.4.1:54508).
May 9 01:20:11.072068 sshd[4815]: Accepted publickey for core from 172.24.4.1 port 54508 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:20:11.075571 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:20:11.095407 systemd-logind[1455]: New session 32 of user core.
May 9 01:20:11.105370 systemd[1]: Started session-32.scope - Session 32 of User core.
May 9 01:20:11.115730 kubelet[2813]: E0509 01:20:11.115619 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:20:11.962562 sshd[4817]: Connection closed by 172.24.4.1 port 54508
May 9 01:20:11.964405 sshd-session[4815]: pam_unix(sshd:session): session closed for user core
May 9 01:20:11.973304 systemd[1]: sshd@29-172.24.4.244:22-172.24.4.1:54508.service: Deactivated successfully.
May 9 01:20:11.978400 systemd[1]: session-32.scope: Deactivated successfully.
May 9 01:20:11.983908 systemd-logind[1455]: Session 32 logged out. Waiting for processes to exit.
May 9 01:20:11.987490 systemd-logind[1455]: Removed session 32.
May 9 01:20:16.116617 kubelet[2813]: E0509 01:20:16.116562 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:20:16.977275 systemd[1]: Started sshd@30-172.24.4.244:22-172.24.4.1:56684.service - OpenSSH per-connection server daemon (172.24.4.1:56684).
May 9 01:20:18.123272 sshd[4832]: Accepted publickey for core from 172.24.4.1 port 56684 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:20:18.132894 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:20:18.142578 systemd-logind[1455]: New session 33 of user core.
May 9 01:20:18.153236 systemd[1]: Started session-33.scope - Session 33 of User core.
May 9 01:20:18.872719 sshd[4834]: Connection closed by 172.24.4.1 port 56684
May 9 01:20:18.873920 sshd-session[4832]: pam_unix(sshd:session): session closed for user core
May 9 01:20:18.881774 systemd[1]: sshd@30-172.24.4.244:22-172.24.4.1:56684.service: Deactivated successfully.
May 9 01:20:18.885377 systemd[1]: session-33.scope: Deactivated successfully.
May 9 01:20:18.886971 systemd-logind[1455]: Session 33 logged out. Waiting for processes to exit.
May 9 01:20:18.888753 systemd-logind[1455]: Removed session 33.
May 9 01:20:21.118109 kubelet[2813]: E0509 01:20:21.117616 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:20:23.901040 systemd[1]: Started sshd@31-172.24.4.244:22-172.24.4.1:56148.service - OpenSSH per-connection server daemon (172.24.4.1:56148).
May 9 01:20:25.101946 sshd[4847]: Accepted publickey for core from 172.24.4.1 port 56148 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:20:25.106662 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:20:25.117212 systemd-logind[1455]: New session 34 of user core.
May 9 01:20:25.120283 systemd[1]: Started session-34.scope - Session 34 of User core.
May 9 01:20:25.849858 sshd[4849]: Connection closed by 172.24.4.1 port 56148
May 9 01:20:25.852519 sshd-session[4847]: pam_unix(sshd:session): session closed for user core
May 9 01:20:25.861196 systemd[1]: sshd@31-172.24.4.244:22-172.24.4.1:56148.service: Deactivated successfully.
May 9 01:20:25.867883 systemd[1]: session-34.scope: Deactivated successfully.
May 9 01:20:25.872524 systemd-logind[1455]: Session 34 logged out. Waiting for processes to exit.
May 9 01:20:25.877410 systemd-logind[1455]: Removed session 34.
May 9 01:20:26.118908 kubelet[2813]: E0509 01:20:26.118421 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:20:30.885228 systemd[1]: Started sshd@32-172.24.4.244:22-172.24.4.1:56164.service - OpenSSH per-connection server daemon (172.24.4.1:56164).
May 9 01:20:31.121945 kubelet[2813]: E0509 01:20:31.121819 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:20:32.147203 sshd[4862]: Accepted publickey for core from 172.24.4.1 port 56164 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:20:32.151714 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:20:32.170046 systemd-logind[1455]: New session 35 of user core.
May 9 01:20:32.181555 systemd[1]: Started session-35.scope - Session 35 of User core.
May 9 01:20:32.894597 sshd[4864]: Connection closed by 172.24.4.1 port 56164
May 9 01:20:32.895258 sshd-session[4862]: pam_unix(sshd:session): session closed for user core
May 9 01:20:32.902708 systemd[1]: sshd@32-172.24.4.244:22-172.24.4.1:56164.service: Deactivated successfully.
May 9 01:20:32.909560 systemd[1]: session-35.scope: Deactivated successfully.
May 9 01:20:32.910846 systemd-logind[1455]: Session 35 logged out. Waiting for processes to exit.
May 9 01:20:32.912341 systemd-logind[1455]: Removed session 35.
May 9 01:20:36.123731 kubelet[2813]: E0509 01:20:36.123416 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:20:37.926586 systemd[1]: Started sshd@33-172.24.4.244:22-172.24.4.1:46918.service - OpenSSH per-connection server daemon (172.24.4.1:46918).
May 9 01:20:38.864639 containerd[1478]: time="2025-05-09T01:20:38.862850253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"af513305dfab2f07ec14a446908cbb032123990bd9fef1d975fa59e8ef91d225\" pid:4893 exited_at:{seconds:1746753638 nanos:861691157}"
May 9 01:20:39.069844 sshd[4877]: Accepted publickey for core from 172.24.4.1 port 46918 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:20:39.077108 sshd-session[4877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:20:39.102211 systemd-logind[1455]: New session 36 of user core.
May 9 01:20:39.108432 systemd[1]: Started session-36.scope - Session 36 of User core.
May 9 01:20:39.961154 sshd[4905]: Connection closed by 172.24.4.1 port 46918
May 9 01:20:39.962781 sshd-session[4877]: pam_unix(sshd:session): session closed for user core
May 9 01:20:39.972251 systemd[1]: sshd@33-172.24.4.244:22-172.24.4.1:46918.service: Deactivated successfully.
May 9 01:20:39.977876 systemd[1]: session-36.scope: Deactivated successfully.
May 9 01:20:39.981505 systemd-logind[1455]: Session 36 logged out. Waiting for processes to exit.
May 9 01:20:39.985325 systemd-logind[1455]: Removed session 36.
May 9 01:20:41.124595 kubelet[2813]: E0509 01:20:41.124505 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:20:44.996524 systemd[1]: Started sshd@34-172.24.4.244:22-172.24.4.1:60578.service - OpenSSH per-connection server daemon (172.24.4.1:60578).
May 9 01:20:46.125107 kubelet[2813]: E0509 01:20:46.124718 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:20:46.128649 sshd[4918]: Accepted publickey for core from 172.24.4.1 port 60578 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:20:46.128515 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:20:46.135721 systemd-logind[1455]: New session 37 of user core.
May 9 01:20:46.141212 systemd[1]: Started session-37.scope - Session 37 of User core.
May 9 01:20:46.925436 sshd[4922]: Connection closed by 172.24.4.1 port 60578
May 9 01:20:46.927321 sshd-session[4918]: pam_unix(sshd:session): session closed for user core
May 9 01:20:46.933311 systemd[1]: sshd@34-172.24.4.244:22-172.24.4.1:60578.service: Deactivated successfully.
May 9 01:20:46.943914 systemd[1]: session-37.scope: Deactivated successfully.
May 9 01:20:46.950048 systemd-logind[1455]: Session 37 logged out. Waiting for processes to exit.
May 9 01:20:46.954856 systemd-logind[1455]: Removed session 37.
May 9 01:20:51.125806 kubelet[2813]: E0509 01:20:51.125664 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:20:51.963667 systemd[1]: Started sshd@35-172.24.4.244:22-172.24.4.1:60594.service - OpenSSH per-connection server daemon (172.24.4.1:60594).
May 9 01:20:53.085058 sshd[4935]: Accepted publickey for core from 172.24.4.1 port 60594 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:20:53.088450 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:20:53.104318 systemd-logind[1455]: New session 38 of user core.
May 9 01:20:53.114341 systemd[1]: Started session-38.scope - Session 38 of User core.
May 9 01:20:54.014916 sshd[4937]: Connection closed by 172.24.4.1 port 60594
May 9 01:20:54.014086 sshd-session[4935]: pam_unix(sshd:session): session closed for user core
May 9 01:20:54.019556 systemd[1]: sshd@35-172.24.4.244:22-172.24.4.1:60594.service: Deactivated successfully.
May 9 01:20:54.021788 systemd[1]: session-38.scope: Deactivated successfully.
May 9 01:20:54.024284 systemd-logind[1455]: Session 38 logged out. Waiting for processes to exit.
May 9 01:20:54.026071 systemd-logind[1455]: Removed session 38.
May 9 01:20:56.126899 kubelet[2813]: E0509 01:20:56.126812 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:20:58.857359 kubelet[2813]: E0509 01:20:58.857052 2813 remote_runtime.go:633] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:20:58.857359 kubelet[2813]: E0509 01:20:58.857175 2813 kubelet.go:2885] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:20:59.043334 systemd[1]: Started sshd@36-172.24.4.244:22-172.24.4.1:36218.service - OpenSSH per-connection server daemon (172.24.4.1:36218).
May 9 01:21:00.064063 sshd[4952]: Accepted publickey for core from 172.24.4.1 port 36218 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:21:00.067578 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:21:00.087301 systemd-logind[1455]: New session 39 of user core.
May 9 01:21:00.095352 systemd[1]: Started session-39.scope - Session 39 of User core.
May 9 01:21:00.952016 sshd[4954]: Connection closed by 172.24.4.1 port 36218
May 9 01:21:00.954108 sshd-session[4952]: pam_unix(sshd:session): session closed for user core
May 9 01:21:00.961717 systemd-logind[1455]: Session 39 logged out. Waiting for processes to exit.
May 9 01:21:00.963512 systemd[1]: sshd@36-172.24.4.244:22-172.24.4.1:36218.service: Deactivated successfully.
May 9 01:21:00.970951 systemd[1]: session-39.scope: Deactivated successfully.
May 9 01:21:00.977030 systemd-logind[1455]: Removed session 39.
May 9 01:21:01.127166 kubelet[2813]: E0509 01:21:01.127073 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:21:05.994647 systemd[1]: Started sshd@37-172.24.4.244:22-172.24.4.1:56632.service - OpenSSH per-connection server daemon (172.24.4.1:56632).
May 9 01:21:06.127551 kubelet[2813]: E0509 01:21:06.127473 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:21:07.163463 sshd[4974]: Accepted publickey for core from 172.24.4.1 port 56632 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:21:07.168258 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:21:07.185776 systemd-logind[1455]: New session 40 of user core.
May 9 01:21:07.191723 systemd[1]: Started session-40.scope - Session 40 of User core.
May 9 01:21:07.911435 sshd[4976]: Connection closed by 172.24.4.1 port 56632
May 9 01:21:07.911119 sshd-session[4974]: pam_unix(sshd:session): session closed for user core
May 9 01:21:07.922365 systemd-logind[1455]: Session 40 logged out. Waiting for processes to exit.
May 9 01:21:07.922937 systemd[1]: sshd@37-172.24.4.244:22-172.24.4.1:56632.service: Deactivated successfully.
May 9 01:21:07.930632 systemd[1]: session-40.scope: Deactivated successfully.
May 9 01:21:07.934891 systemd-logind[1455]: Removed session 40.
May 9 01:21:08.832052 containerd[1478]: time="2025-05-09T01:21:08.831570072Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"860c763c70dcd590bffd60901e0e0b0de1f396aaf23f9c0db7e56369c16e3f61\" pid:5001 exited_at:{seconds:1746753668 nanos:830713362}"
May 9 01:21:11.128518 kubelet[2813]: E0509 01:21:11.128438 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:21:12.932355 systemd[1]: Started sshd@38-172.24.4.244:22-172.24.4.1:56648.service - OpenSSH per-connection server daemon (172.24.4.1:56648).
May 9 01:21:14.077567 sshd[5019]: Accepted publickey for core from 172.24.4.1 port 56648 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:21:14.084113 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:21:14.098098 systemd-logind[1455]: New session 41 of user core.
May 9 01:21:14.105445 systemd[1]: Started session-41.scope - Session 41 of User core.
May 9 01:21:14.867394 sshd[5021]: Connection closed by 172.24.4.1 port 56648
May 9 01:21:14.866695 sshd-session[5019]: pam_unix(sshd:session): session closed for user core
May 9 01:21:14.874293 systemd[1]: sshd@38-172.24.4.244:22-172.24.4.1:56648.service: Deactivated successfully.
May 9 01:21:14.879763 systemd[1]: session-41.scope: Deactivated successfully.
May 9 01:21:14.883728 systemd-logind[1455]: Session 41 logged out. Waiting for processes to exit.
May 9 01:21:14.886218 systemd-logind[1455]: Removed session 41.
May 9 01:21:16.129111 kubelet[2813]: E0509 01:21:16.128960 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:21:19.881098 systemd[1]: Started sshd@39-172.24.4.244:22-172.24.4.1:42352.service - OpenSSH per-connection server daemon (172.24.4.1:42352).
May 9 01:21:21.095066 sshd[5036]: Accepted publickey for core from 172.24.4.1 port 42352 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:21:21.099348 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:21:21.113700 systemd-logind[1455]: New session 42 of user core.
May 9 01:21:21.121335 systemd[1]: Started session-42.scope - Session 42 of User core.
May 9 01:21:21.130192 kubelet[2813]: E0509 01:21:21.130086 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:21:21.883857 sshd[5038]: Connection closed by 172.24.4.1 port 42352
May 9 01:21:21.884632 sshd-session[5036]: pam_unix(sshd:session): session closed for user core
May 9 01:21:21.891912 systemd[1]: sshd@39-172.24.4.244:22-172.24.4.1:42352.service: Deactivated successfully.
May 9 01:21:21.900633 systemd[1]: session-42.scope: Deactivated successfully.
May 9 01:21:21.903133 systemd-logind[1455]: Session 42 logged out. Waiting for processes to exit.
May 9 01:21:21.905521 systemd-logind[1455]: Removed session 42.
May 9 01:21:26.131037 kubelet[2813]: E0509 01:21:26.130943 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:21:26.917213 systemd[1]: Started sshd@40-172.24.4.244:22-172.24.4.1:42296.service - OpenSSH per-connection server daemon (172.24.4.1:42296).
May 9 01:21:28.111602 sshd[5051]: Accepted publickey for core from 172.24.4.1 port 42296 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:21:28.115829 sshd-session[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:21:28.132105 systemd-logind[1455]: New session 43 of user core.
May 9 01:21:28.139391 systemd[1]: Started session-43.scope - Session 43 of User core.
May 9 01:21:28.901991 sshd[5053]: Connection closed by 172.24.4.1 port 42296
May 9 01:21:28.902764 sshd-session[5051]: pam_unix(sshd:session): session closed for user core
May 9 01:21:28.907777 systemd-logind[1455]: Session 43 logged out. Waiting for processes to exit.
May 9 01:21:28.908383 systemd[1]: sshd@40-172.24.4.244:22-172.24.4.1:42296.service: Deactivated successfully.
May 9 01:21:28.912192 systemd[1]: session-43.scope: Deactivated successfully.
May 9 01:21:28.914442 systemd-logind[1455]: Removed session 43.
May 9 01:21:31.131736 kubelet[2813]: E0509 01:21:31.131655 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:21:33.930463 systemd[1]: Started sshd@41-172.24.4.244:22-172.24.4.1:38282.service - OpenSSH per-connection server daemon (172.24.4.1:38282).
May 9 01:21:35.068546 sshd[5066]: Accepted publickey for core from 172.24.4.1 port 38282 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:21:35.075606 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:21:35.091433 systemd-logind[1455]: New session 44 of user core.
May 9 01:21:35.098301 systemd[1]: Started session-44.scope - Session 44 of User core.
May 9 01:21:35.959201 sshd[5068]: Connection closed by 172.24.4.1 port 38282
May 9 01:21:35.960660 sshd-session[5066]: pam_unix(sshd:session): session closed for user core
May 9 01:21:35.969638 systemd[1]: sshd@41-172.24.4.244:22-172.24.4.1:38282.service: Deactivated successfully.
May 9 01:21:35.975382 systemd[1]: session-44.scope: Deactivated successfully.
May 9 01:21:35.978377 systemd-logind[1455]: Session 44 logged out. Waiting for processes to exit.
May 9 01:21:35.981115 systemd-logind[1455]: Removed session 44.
May 9 01:21:36.132169 kubelet[2813]: E0509 01:21:36.132081 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:21:38.797309 containerd[1478]: time="2025-05-09T01:21:38.797210097Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"5a14b92d9a7bb3bca7f5d4ef9d3c140531e855bd4159ad41de93abf420fae7d5\" pid:5092 exited_at:{seconds:1746753698 nanos:796479644}"
May 9 01:21:40.982179 systemd[1]: Started sshd@42-172.24.4.244:22-172.24.4.1:38284.service - OpenSSH per-connection server daemon (172.24.4.1:38284).
May 9 01:21:41.132524 kubelet[2813]: E0509 01:21:41.132408 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:21:42.164447 sshd[5106]: Accepted publickey for core from 172.24.4.1 port 38284 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:21:42.167600 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:21:42.183124 systemd-logind[1455]: New session 45 of user core.
May 9 01:21:42.193310 systemd[1]: Started session-45.scope - Session 45 of User core.
May 9 01:21:42.913086 sshd[5109]: Connection closed by 172.24.4.1 port 38284
May 9 01:21:42.913761 sshd-session[5106]: pam_unix(sshd:session): session closed for user core
May 9 01:21:42.922724 systemd[1]: sshd@42-172.24.4.244:22-172.24.4.1:38284.service: Deactivated successfully.
May 9 01:21:42.927958 systemd[1]: session-45.scope: Deactivated successfully.
May 9 01:21:42.932071 systemd-logind[1455]: Session 45 logged out. Waiting for processes to exit.
May 9 01:21:42.935282 systemd-logind[1455]: Removed session 45.
May 9 01:21:46.133138 kubelet[2813]: E0509 01:21:46.133088 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:21:47.937180 systemd[1]: Started sshd@43-172.24.4.244:22-172.24.4.1:48492.service - OpenSSH per-connection server daemon (172.24.4.1:48492).
May 9 01:21:49.114217 sshd[5124]: Accepted publickey for core from 172.24.4.1 port 48492 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:21:49.118898 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:21:49.133535 systemd-logind[1455]: New session 46 of user core.
May 9 01:21:49.141139 systemd[1]: Started session-46.scope - Session 46 of User core.
May 9 01:21:50.049070 sshd[5127]: Connection closed by 172.24.4.1 port 48492
May 9 01:21:50.051047 sshd-session[5124]: pam_unix(sshd:session): session closed for user core
May 9 01:21:50.071471 systemd[1]: sshd@43-172.24.4.244:22-172.24.4.1:48492.service: Deactivated successfully.
May 9 01:21:50.084282 systemd[1]: session-46.scope: Deactivated successfully.
May 9 01:21:50.093622 systemd-logind[1455]: Session 46 logged out. Waiting for processes to exit.
May 9 01:21:50.099911 systemd-logind[1455]: Removed session 46.
May 9 01:21:51.133712 kubelet[2813]: E0509 01:21:51.133623 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:21:55.082099 systemd[1]: Started sshd@44-172.24.4.244:22-172.24.4.1:42028.service - OpenSSH per-connection server daemon (172.24.4.1:42028).
May 9 01:21:56.134301 kubelet[2813]: E0509 01:21:56.134205 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:21:56.198966 sshd[5140]: Accepted publickey for core from 172.24.4.1 port 42028 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:21:56.206218 sshd-session[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:21:56.226535 systemd-logind[1455]: New session 47 of user core.
May 9 01:21:56.235332 systemd[1]: Started session-47.scope - Session 47 of User core.
May 9 01:21:56.988087 sshd[5142]: Connection closed by 172.24.4.1 port 42028
May 9 01:21:56.989503 sshd-session[5140]: pam_unix(sshd:session): session closed for user core
May 9 01:21:57.014589 systemd[1]: sshd@44-172.24.4.244:22-172.24.4.1:42028.service: Deactivated successfully.
May 9 01:21:57.021946 systemd[1]: session-47.scope: Deactivated successfully.
May 9 01:21:57.030404 systemd-logind[1455]: Session 47 logged out. Waiting for processes to exit.
May 9 01:21:57.036237 systemd[1]: Started sshd@45-172.24.4.244:22-172.24.4.1:42030.service - OpenSSH per-connection server daemon (172.24.4.1:42030).
May 9 01:21:57.042120 systemd-logind[1455]: Removed session 47.
May 9 01:21:58.176310 sshd[5154]: Accepted publickey for core from 172.24.4.1 port 42030 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:21:58.178853 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:21:58.187886 systemd-logind[1455]: New session 48 of user core.
May 9 01:21:58.195162 systemd[1]: Started session-48.scope - Session 48 of User core.
May 9 01:21:59.019764 sshd[5157]: Connection closed by 172.24.4.1 port 42030
May 9 01:21:59.019530 sshd-session[5154]: pam_unix(sshd:session): session closed for user core
May 9 01:21:59.031191 systemd[1]: sshd@45-172.24.4.244:22-172.24.4.1:42030.service: Deactivated successfully.
May 9 01:21:59.033527 systemd[1]: session-48.scope: Deactivated successfully.
May 9 01:21:59.038182 systemd-logind[1455]: Session 48 logged out. Waiting for processes to exit.
May 9 01:21:59.041378 systemd[1]: Started sshd@46-172.24.4.244:22-172.24.4.1:42038.service - OpenSSH per-connection server daemon (172.24.4.1:42038).
May 9 01:21:59.045389 systemd-logind[1455]: Removed session 48.
May 9 01:22:00.359379 sshd[5168]: Accepted publickey for core from 172.24.4.1 port 42038 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:22:00.362687 sshd-session[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:22:00.376132 systemd-logind[1455]: New session 49 of user core.
May 9 01:22:00.384381 systemd[1]: Started session-49.scope - Session 49 of User core.
May 9 01:22:01.070037 sshd[5171]: Connection closed by 172.24.4.1 port 42038
May 9 01:22:01.069776 sshd-session[5168]: pam_unix(sshd:session): session closed for user core
May 9 01:22:01.083497 systemd[1]: sshd@46-172.24.4.244:22-172.24.4.1:42038.service: Deactivated successfully.
May 9 01:22:01.091553 systemd[1]: session-49.scope: Deactivated successfully.
May 9 01:22:01.099482 systemd-logind[1455]: Session 49 logged out. Waiting for processes to exit.
May 9 01:22:01.102907 systemd-logind[1455]: Removed session 49.
May 9 01:22:01.134614 kubelet[2813]: E0509 01:22:01.134453 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:22:06.101309 systemd[1]: Started sshd@47-172.24.4.244:22-172.24.4.1:49876.service - OpenSSH per-connection server daemon (172.24.4.1:49876).
May 9 01:22:06.136238 kubelet[2813]: E0509 01:22:06.136117 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:22:07.261697 sshd[5183]: Accepted publickey for core from 172.24.4.1 port 49876 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:22:07.265942 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:22:07.282380 systemd-logind[1455]: New session 50 of user core.
May 9 01:22:07.292304 systemd[1]: Started session-50.scope - Session 50 of User core.
May 9 01:22:07.887034 sshd[5185]: Connection closed by 172.24.4.1 port 49876
May 9 01:22:07.886366 sshd-session[5183]: pam_unix(sshd:session): session closed for user core
May 9 01:22:07.906282 systemd[1]: sshd@47-172.24.4.244:22-172.24.4.1:49876.service: Deactivated successfully.
May 9 01:22:07.911158 systemd[1]: session-50.scope: Deactivated successfully.
May 9 01:22:07.914584 systemd-logind[1455]: Session 50 logged out. Waiting for processes to exit.
May 9 01:22:07.920545 systemd[1]: Started sshd@48-172.24.4.244:22-172.24.4.1:49884.service - OpenSSH per-connection server daemon (172.24.4.1:49884).
May 9 01:22:07.925108 systemd-logind[1455]: Removed session 50.
May 9 01:22:08.788834 containerd[1478]: time="2025-05-09T01:22:08.788657217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"136056928e28b226cffc024cf44d24e19dcb7bf731817e68ae0ad0bc4dc26c55\" pid:5211 exited_at:{seconds:1746753728 nanos:785211965}"
May 9 01:22:09.322357 sshd[5196]: Accepted publickey for core from 172.24.4.1 port 49884 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:22:09.325254 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:22:09.337074 systemd-logind[1455]: New session 51 of user core.
May 9 01:22:09.345280 systemd[1]: Started session-51.scope - Session 51 of User core.
May 9 01:22:10.375085 sshd[5222]: Connection closed by 172.24.4.1 port 49884
May 9 01:22:10.373515 sshd-session[5196]: pam_unix(sshd:session): session closed for user core
May 9 01:22:10.398478 systemd[1]: sshd@48-172.24.4.244:22-172.24.4.1:49884.service: Deactivated successfully.
May 9 01:22:10.405766 systemd[1]: session-51.scope: Deactivated successfully.
May 9 01:22:10.412325 systemd-logind[1455]: Session 51 logged out. Waiting for processes to exit.
May 9 01:22:10.419224 systemd[1]: Started sshd@49-172.24.4.244:22-172.24.4.1:49898.service - OpenSSH per-connection server daemon (172.24.4.1:49898).
May 9 01:22:10.424820 systemd-logind[1455]: Removed session 51.
May 9 01:22:11.137126 kubelet[2813]: E0509 01:22:11.137017 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:22:11.584199 sshd[5231]: Accepted publickey for core from 172.24.4.1 port 49898 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:22:11.588843 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:22:11.603615 systemd-logind[1455]: New session 52 of user core.
May 9 01:22:11.608427 systemd[1]: Started session-52.scope - Session 52 of User core.
May 9 01:22:15.601132 sshd[5234]: Connection closed by 172.24.4.1 port 49898
May 9 01:22:15.604026 sshd-session[5231]: pam_unix(sshd:session): session closed for user core
May 9 01:22:15.619146 systemd[1]: sshd@49-172.24.4.244:22-172.24.4.1:49898.service: Deactivated successfully.
May 9 01:22:15.623171 systemd[1]: session-52.scope: Deactivated successfully.
May 9 01:22:15.623860 systemd[1]: session-52.scope: Consumed 997ms CPU time, 66.2M memory peak.
May 9 01:22:15.625702 systemd-logind[1455]: Session 52 logged out. Waiting for processes to exit.
May 9 01:22:15.631830 systemd[1]: Started sshd@50-172.24.4.244:22-172.24.4.1:35828.service - OpenSSH per-connection server daemon (172.24.4.1:35828).
May 9 01:22:15.635043 systemd-logind[1455]: Removed session 52.
May 9 01:22:16.138440 kubelet[2813]: E0509 01:22:16.138285 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:22:17.066500 sshd[5252]: Accepted publickey for core from 172.24.4.1 port 35828 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:22:17.068467 sshd-session[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:22:17.080068 systemd-logind[1455]: New session 53 of user core.
May 9 01:22:17.083150 systemd[1]: Started session-53.scope - Session 53 of User core.
May 9 01:22:18.043009 sshd[5257]: Connection closed by 172.24.4.1 port 35828
May 9 01:22:18.044574 sshd-session[5252]: pam_unix(sshd:session): session closed for user core
May 9 01:22:18.066360 systemd[1]: sshd@50-172.24.4.244:22-172.24.4.1:35828.service: Deactivated successfully.
May 9 01:22:18.074916 systemd[1]: session-53.scope: Deactivated successfully.
May 9 01:22:18.078239 systemd-logind[1455]: Session 53 logged out. Waiting for processes to exit.
May 9 01:22:18.084520 systemd[1]: Started sshd@51-172.24.4.244:22-172.24.4.1:35840.service - OpenSSH per-connection server daemon (172.24.4.1:35840).
May 9 01:22:18.088587 systemd-logind[1455]: Removed session 53.
May 9 01:22:19.312006 sshd[5266]: Accepted publickey for core from 172.24.4.1 port 35840 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:22:19.313545 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:22:19.323226 systemd-logind[1455]: New session 54 of user core.
May 9 01:22:19.331137 systemd[1]: Started session-54.scope - Session 54 of User core.
May 9 01:22:20.199708 sshd[5269]: Connection closed by 172.24.4.1 port 35840
May 9 01:22:20.200569 sshd-session[5266]: pam_unix(sshd:session): session closed for user core
May 9 01:22:20.204095 systemd[1]: sshd@51-172.24.4.244:22-172.24.4.1:35840.service: Deactivated successfully.
May 9 01:22:20.206752 systemd[1]: session-54.scope: Deactivated successfully.
May 9 01:22:20.208643 systemd-logind[1455]: Session 54 logged out. Waiting for processes to exit.
May 9 01:22:20.209681 systemd-logind[1455]: Removed session 54.
May 9 01:22:21.138768 kubelet[2813]: E0509 01:22:21.138625 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:22:25.232510 systemd[1]: Started sshd@52-172.24.4.244:22-172.24.4.1:52752.service - OpenSSH per-connection server daemon (172.24.4.1:52752).
May 9 01:22:26.139191 kubelet[2813]: E0509 01:22:26.139140 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:22:26.365066 sshd[5286]: Accepted publickey for core from 172.24.4.1 port 52752 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:22:26.366618 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:22:26.379770 systemd-logind[1455]: New session 55 of user core.
May 9 01:22:26.388315 systemd[1]: Started session-55.scope - Session 55 of User core.
May 9 01:22:27.145220 sshd[5288]: Connection closed by 172.24.4.1 port 52752
May 9 01:22:27.146169 sshd-session[5286]: pam_unix(sshd:session): session closed for user core
May 9 01:22:27.151965 systemd[1]: sshd@52-172.24.4.244:22-172.24.4.1:52752.service: Deactivated successfully.
May 9 01:22:27.157316 systemd[1]: session-55.scope: Deactivated successfully.
May 9 01:22:27.158990 systemd-logind[1455]: Session 55 logged out. Waiting for processes to exit.
May 9 01:22:27.160922 systemd-logind[1455]: Removed session 55.
May 9 01:22:31.140186 kubelet[2813]: E0509 01:22:31.140110 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:22:32.178546 systemd[1]: Started sshd@53-172.24.4.244:22-172.24.4.1:52764.service - OpenSSH per-connection server daemon (172.24.4.1:52764).
May 9 01:22:33.485576 sshd[5300]: Accepted publickey for core from 172.24.4.1 port 52764 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:22:33.489331 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:22:33.502509 systemd-logind[1455]: New session 56 of user core.
May 9 01:22:33.514409 systemd[1]: Started session-56.scope - Session 56 of User core.
May 9 01:22:34.378556 sshd[5302]: Connection closed by 172.24.4.1 port 52764
May 9 01:22:34.379968 sshd-session[5300]: pam_unix(sshd:session): session closed for user core
May 9 01:22:34.391087 systemd[1]: sshd@53-172.24.4.244:22-172.24.4.1:52764.service: Deactivated successfully.
May 9 01:22:34.398611 systemd[1]: session-56.scope: Deactivated successfully.
May 9 01:22:34.401548 systemd-logind[1455]: Session 56 logged out. Waiting for processes to exit.
May 9 01:22:34.404395 systemd-logind[1455]: Removed session 56.
May 9 01:22:36.140355 kubelet[2813]: E0509 01:22:36.140284 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:22:38.804344 containerd[1478]: time="2025-05-09T01:22:38.803737155Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3096f540bfca5769f8c1daf1a2e200cca087323051e7f64b89860bb746256a73\" id:\"8941ee44960184e2fc82d74e247327b7c09b410034cd1f3fee5a464fae6991cd\" pid:5333 exited_at:{seconds:1746753758 nanos:803257875}"
May 9 01:22:39.417354 systemd[1]: Started sshd@54-172.24.4.244:22-172.24.4.1:54436.service - OpenSSH per-connection server daemon (172.24.4.1:54436).
May 9 01:22:41.031348 sshd[5346]: Accepted publickey for core from 172.24.4.1 port 54436 ssh2: RSA SHA256:WJyoLV1Y2PoJ7+R3QaItjWjFcUx1ollMA7+rtohHwe4
May 9 01:22:41.035313 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:22:41.050710 systemd-logind[1455]: New session 57 of user core.
May 9 01:22:41.058541 systemd[1]: Started session-57.scope - Session 57 of User core.
May 9 01:22:41.141376 kubelet[2813]: E0509 01:22:41.141047 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:22:41.871105 sshd[5348]: Connection closed by 172.24.4.1 port 54436
May 9 01:22:41.873420 sshd-session[5346]: pam_unix(sshd:session): session closed for user core
May 9 01:22:41.888187 systemd[1]: sshd@54-172.24.4.244:22-172.24.4.1:54436.service: Deactivated successfully.
May 9 01:22:41.897511 systemd[1]: session-57.scope: Deactivated successfully.
May 9 01:22:41.901144 systemd-logind[1455]: Session 57 logged out. Waiting for processes to exit.
May 9 01:22:41.904204 systemd-logind[1455]: Removed session 57.
May 9 01:22:46.143579 kubelet[2813]: E0509 01:22:46.143396 2813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"