May 9 01:40:29.066947 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu May 8 22:15:16 -00 2025
May 9 01:40:29.066978 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6dbb211661f4d09f7718fdc7eab00f1550a8baafb68f4d2efdaedafa102351ae
May 9 01:40:29.066988 kernel: BIOS-provided physical RAM map:
May 9 01:40:29.066996 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 9 01:40:29.067004 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 9 01:40:29.067014 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 9 01:40:29.067023 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
May 9 01:40:29.067031 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
May 9 01:40:29.067038 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 9 01:40:29.067046 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 9 01:40:29.067054 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
May 9 01:40:29.067062 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 9 01:40:29.067069 kernel: NX (Execute Disable) protection: active
May 9 01:40:29.067077 kernel: APIC: Static calls initialized
May 9 01:40:29.067088 kernel: SMBIOS 3.0.0 present.
May 9 01:40:29.067097 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
May 9 01:40:29.067105 kernel: Hypervisor detected: KVM
May 9 01:40:29.067113 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 9 01:40:29.067121 kernel: kvm-clock: using sched offset of 3771085145 cycles
May 9 01:40:29.067130 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 9 01:40:29.067140 kernel: tsc: Detected 1996.249 MHz processor
May 9 01:40:29.067149 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 9 01:40:29.067157 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 9 01:40:29.067166 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
May 9 01:40:29.067174 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 9 01:40:29.067183 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 9 01:40:29.067191 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
May 9 01:40:29.067199 kernel: ACPI: Early table checksum verification disabled
May 9 01:40:29.067209 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
May 9 01:40:29.067218 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 01:40:29.067226 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 01:40:29.067234 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 01:40:29.067242 kernel: ACPI: FACS 0x00000000BFFE0000 000040
May 9 01:40:29.067251 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 9 01:40:29.067259 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 9 01:40:29.067267 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
May 9 01:40:29.067276 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 9 01:40:29.067286 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
May 9 01:40:29.067294 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
May 9 01:40:29.067303 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
May 9 01:40:29.067314 kernel: No NUMA configuration found
May 9 01:40:29.067323 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
May 9 01:40:29.067331 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
May 9 01:40:29.067340 kernel: Zone ranges:
May 9 01:40:29.067350 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 9 01:40:29.067359 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 9 01:40:29.067368 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
May 9 01:40:29.067376 kernel: Movable zone start for each node
May 9 01:40:29.067385 kernel: Early memory node ranges
May 9 01:40:29.067393 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 9 01:40:29.067402 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
May 9 01:40:29.067410 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
May 9 01:40:29.067421 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
May 9 01:40:29.067430 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 9 01:40:29.067438 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 9 01:40:29.067447 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 9 01:40:29.067456 kernel: ACPI: PM-Timer IO Port: 0x608
May 9 01:40:29.067464 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 9 01:40:29.067473 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 9 01:40:29.067482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 9 01:40:29.067490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 9 01:40:29.067501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 9 01:40:29.067509 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 9 01:40:29.067518 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 9 01:40:29.067526 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 9 01:40:29.067535 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 9 01:40:29.067544 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 9 01:40:29.067552 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
May 9 01:40:29.067561 kernel: Booting paravirtualized kernel on KVM
May 9 01:40:29.067570 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 9 01:40:29.067581 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 9 01:40:29.067590 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 9 01:40:29.067598 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 9 01:40:29.067607 kernel: pcpu-alloc: [0] 0 1
May 9 01:40:29.067615 kernel: kvm-guest: PV spinlocks disabled, no host support
May 9 01:40:29.067625 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6dbb211661f4d09f7718fdc7eab00f1550a8baafb68f4d2efdaedafa102351ae
May 9 01:40:29.067634 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 01:40:29.067643 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 01:40:29.067654 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 01:40:29.067663 kernel: Fallback order for Node 0: 0
May 9 01:40:29.067672 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 9 01:40:29.067680 kernel: Policy zone: Normal
May 9 01:40:29.067689 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 01:40:29.067697 kernel: software IO TLB: area num 2.
May 9 01:40:29.067706 kernel: Memory: 3962108K/4193772K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 231404K reserved, 0K cma-reserved)
May 9 01:40:29.067715 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 9 01:40:29.067724 kernel: ftrace: allocating 37993 entries in 149 pages
May 9 01:40:29.067735 kernel: ftrace: allocated 149 pages with 4 groups
May 9 01:40:29.067743 kernel: Dynamic Preempt: voluntary
May 9 01:40:29.067752 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 01:40:29.067761 kernel: rcu: RCU event tracing is enabled.
May 9 01:40:29.067770 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 9 01:40:29.067778 kernel: Trampoline variant of Tasks RCU enabled.
May 9 01:40:29.068477 kernel: Rude variant of Tasks RCU enabled.
May 9 01:40:29.068489 kernel: Tracing variant of Tasks RCU enabled.
May 9 01:40:29.068498 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 01:40:29.068511 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 9 01:40:29.068520 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 9 01:40:29.068528 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 01:40:29.068537 kernel: Console: colour VGA+ 80x25
May 9 01:40:29.068545 kernel: printk: console [tty0] enabled
May 9 01:40:29.068554 kernel: printk: console [ttyS0] enabled
May 9 01:40:29.068562 kernel: ACPI: Core revision 20230628
May 9 01:40:29.068571 kernel: APIC: Switch to symmetric I/O mode setup
May 9 01:40:29.068579 kernel: x2apic enabled
May 9 01:40:29.068590 kernel: APIC: Switched APIC routing to: physical x2apic
May 9 01:40:29.068598 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 9 01:40:29.068607 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 9 01:40:29.068616 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
May 9 01:40:29.068624 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 9 01:40:29.068650 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 9 01:40:29.068668 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 9 01:40:29.068677 kernel: Spectre V2 : Mitigation: Retpolines
May 9 01:40:29.068686 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 9 01:40:29.068697 kernel: Speculative Store Bypass: Vulnerable
May 9 01:40:29.068706 kernel: x86/fpu: x87 FPU will use FXSAVE
May 9 01:40:29.068714 kernel: Freeing SMP alternatives memory: 32K
May 9 01:40:29.068723 kernel: pid_max: default: 32768 minimum: 301
May 9 01:40:29.068738 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 01:40:29.068748 kernel: landlock: Up and running.
May 9 01:40:29.068757 kernel: SELinux: Initializing.
May 9 01:40:29.068766 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 01:40:29.068775 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 01:40:29.068797 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
May 9 01:40:29.068820 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 01:40:29.068829 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 01:40:29.068841 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 01:40:29.068850 kernel: Performance Events: AMD PMU driver.
May 9 01:40:29.068859 kernel: ... version: 0
May 9 01:40:29.068868 kernel: ... bit width: 48
May 9 01:40:29.068877 kernel: ... generic registers: 4
May 9 01:40:29.068888 kernel: ... value mask: 0000ffffffffffff
May 9 01:40:29.068897 kernel: ... max period: 00007fffffffffff
May 9 01:40:29.068906 kernel: ... fixed-purpose events: 0
May 9 01:40:29.068915 kernel: ... event mask: 000000000000000f
May 9 01:40:29.068924 kernel: signal: max sigframe size: 1440
May 9 01:40:29.068933 kernel: rcu: Hierarchical SRCU implementation.
May 9 01:40:29.068942 kernel: rcu: Max phase no-delay instances is 400.
May 9 01:40:29.068951 kernel: smp: Bringing up secondary CPUs ...
May 9 01:40:29.068960 kernel: smpboot: x86: Booting SMP configuration:
May 9 01:40:29.068971 kernel: .... node #0, CPUs: #1
May 9 01:40:29.068979 kernel: smp: Brought up 1 node, 2 CPUs
May 9 01:40:29.068988 kernel: smpboot: Max logical packages: 2
May 9 01:40:29.068997 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
May 9 01:40:29.069006 kernel: devtmpfs: initialized
May 9 01:40:29.069015 kernel: x86/mm: Memory block size: 128MB
May 9 01:40:29.069024 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 01:40:29.069033 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 9 01:40:29.069042 kernel: pinctrl core: initialized pinctrl subsystem
May 9 01:40:29.069053 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 01:40:29.069062 kernel: audit: initializing netlink subsys (disabled)
May 9 01:40:29.069072 kernel: audit: type=2000 audit(1746754828.476:1): state=initialized audit_enabled=0 res=1
May 9 01:40:29.069080 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 01:40:29.069089 kernel: thermal_sys: Registered thermal governor 'user_space'
May 9 01:40:29.069098 kernel: cpuidle: using governor menu
May 9 01:40:29.069107 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 01:40:29.069116 kernel: dca service started, version 1.12.1
May 9 01:40:29.069125 kernel: PCI: Using configuration type 1 for base access
May 9 01:40:29.069136 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 9 01:40:29.069145 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 01:40:29.069154 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 9 01:40:29.069163 kernel: ACPI: Added _OSI(Module Device)
May 9 01:40:29.069172 kernel: ACPI: Added _OSI(Processor Device)
May 9 01:40:29.069181 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 01:40:29.069190 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 01:40:29.069199 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 01:40:29.069208 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 9 01:40:29.069219 kernel: ACPI: Interpreter enabled
May 9 01:40:29.069228 kernel: ACPI: PM: (supports S0 S3 S5)
May 9 01:40:29.069237 kernel: ACPI: Using IOAPIC for interrupt routing
May 9 01:40:29.069246 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 9 01:40:29.069255 kernel: PCI: Using E820 reservations for host bridge windows
May 9 01:40:29.069264 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 9 01:40:29.069273 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 9 01:40:29.069420 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 9 01:40:29.069522 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 9 01:40:29.069617 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 9 01:40:29.069631 kernel: acpiphp: Slot [3] registered
May 9 01:40:29.069640 kernel: acpiphp: Slot [4] registered
May 9 01:40:29.069649 kernel: acpiphp: Slot [5] registered
May 9 01:40:29.069658 kernel: acpiphp: Slot [6] registered
May 9 01:40:29.069667 kernel: acpiphp: Slot [7] registered
May 9 01:40:29.069676 kernel: acpiphp: Slot [8] registered
May 9 01:40:29.069684 kernel: acpiphp: Slot [9] registered
May 9 01:40:29.069697 kernel: acpiphp: Slot [10] registered
May 9 01:40:29.069706 kernel: acpiphp: Slot [11] registered
May 9 01:40:29.069715 kernel: acpiphp: Slot [12] registered
May 9 01:40:29.069723 kernel: acpiphp: Slot [13] registered
May 9 01:40:29.069732 kernel: acpiphp: Slot [14] registered
May 9 01:40:29.069741 kernel: acpiphp: Slot [15] registered
May 9 01:40:29.069750 kernel: acpiphp: Slot [16] registered
May 9 01:40:29.069759 kernel: acpiphp: Slot [17] registered
May 9 01:40:29.069768 kernel: acpiphp: Slot [18] registered
May 9 01:40:29.069779 kernel: acpiphp: Slot [19] registered
May 9 01:40:29.069806 kernel: acpiphp: Slot [20] registered
May 9 01:40:29.069815 kernel: acpiphp: Slot [21] registered
May 9 01:40:29.069824 kernel: acpiphp: Slot [22] registered
May 9 01:40:29.069833 kernel: acpiphp: Slot [23] registered
May 9 01:40:29.069842 kernel: acpiphp: Slot [24] registered
May 9 01:40:29.069851 kernel: acpiphp: Slot [25] registered
May 9 01:40:29.069860 kernel: acpiphp: Slot [26] registered
May 9 01:40:29.069868 kernel: acpiphp: Slot [27] registered
May 9 01:40:29.069880 kernel: acpiphp: Slot [28] registered
May 9 01:40:29.069889 kernel: acpiphp: Slot [29] registered
May 9 01:40:29.069897 kernel: acpiphp: Slot [30] registered
May 9 01:40:29.069906 kernel: acpiphp: Slot [31] registered
May 9 01:40:29.069915 kernel: PCI host bridge to bus 0000:00
May 9 01:40:29.070022 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 9 01:40:29.070111 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 9 01:40:29.070198 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 9 01:40:29.070306 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 9 01:40:29.070394 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
May 9 01:40:29.070480 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 9 01:40:29.070598 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 9 01:40:29.070705 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 9 01:40:29.070869 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 9 01:40:29.070986 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
May 9 01:40:29.071111 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 9 01:40:29.071209 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 9 01:40:29.071306 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 9 01:40:29.071413 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 9 01:40:29.071533 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 9 01:40:29.071631 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 9 01:40:29.071735 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 9 01:40:29.071895 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 9 01:40:29.071997 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 9 01:40:29.072104 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
May 9 01:40:29.072200 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
May 9 01:40:29.072295 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
May 9 01:40:29.072391 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 9 01:40:29.072502 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 9 01:40:29.072600 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
May 9 01:40:29.072697 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
May 9 01:40:29.072822 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
May 9 01:40:29.072926 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
May 9 01:40:29.073037 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 9 01:40:29.073134 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 9 01:40:29.073235 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
May 9 01:40:29.073331 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
May 9 01:40:29.073434 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
May 9 01:40:29.073531 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
May 9 01:40:29.073627 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
May 9 01:40:29.073732 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
May 9 01:40:29.073884 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
May 9 01:40:29.074031 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
May 9 01:40:29.074160 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
May 9 01:40:29.074175 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 9 01:40:29.074184 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 9 01:40:29.074194 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 9 01:40:29.074203 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 9 01:40:29.074212 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 9 01:40:29.074221 kernel: iommu: Default domain type: Translated
May 9 01:40:29.074235 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 9 01:40:29.074244 kernel: PCI: Using ACPI for IRQ routing
May 9 01:40:29.074253 kernel: PCI: pci_cache_line_size set to 64 bytes
May 9 01:40:29.074262 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 9 01:40:29.074271 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
May 9 01:40:29.074380 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 9 01:40:29.074475 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 9 01:40:29.074569 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 9 01:40:29.074582 kernel: vgaarb: loaded
May 9 01:40:29.074595 kernel: clocksource: Switched to clocksource kvm-clock
May 9 01:40:29.074604 kernel: VFS: Disk quotas dquot_6.6.0
May 9 01:40:29.074613 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 01:40:29.074622 kernel: pnp: PnP ACPI init
May 9 01:40:29.074719 kernel: pnp 00:03: [dma 2]
May 9 01:40:29.074734 kernel: pnp: PnP ACPI: found 5 devices
May 9 01:40:29.074744 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 9 01:40:29.074753 kernel: NET: Registered PF_INET protocol family
May 9 01:40:29.074765 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 01:40:29.074775 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 01:40:29.074833 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 01:40:29.074845 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 01:40:29.074854 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 01:40:29.074864 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 01:40:29.074873 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 01:40:29.074882 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 01:40:29.074891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 01:40:29.074904 kernel: NET: Registered PF_XDP protocol family
May 9 01:40:29.075022 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 9 01:40:29.075116 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 9 01:40:29.075198 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 9 01:40:29.075279 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 9 01:40:29.075360 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
May 9 01:40:29.075453 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 9 01:40:29.075550 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 9 01:40:29.075568 kernel: PCI: CLS 0 bytes, default 64
May 9 01:40:29.075577 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 9 01:40:29.075586 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
May 9 01:40:29.075596 kernel: Initialise system trusted keyrings
May 9 01:40:29.075605 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 01:40:29.075614 kernel: Key type asymmetric registered
May 9 01:40:29.075623 kernel: Asymmetric key parser 'x509' registered
May 9 01:40:29.075632 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 9 01:40:29.075641 kernel: io scheduler mq-deadline registered
May 9 01:40:29.075653 kernel: io scheduler kyber registered
May 9 01:40:29.075661 kernel: io scheduler bfq registered
May 9 01:40:29.075671 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 9 01:40:29.075680 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 9 01:40:29.075690 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 9 01:40:29.075699 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 9 01:40:29.075708 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 9 01:40:29.075717 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 01:40:29.075727 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 9 01:40:29.075738 kernel: random: crng init done
May 9 01:40:29.075747 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 9 01:40:29.075756 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 9 01:40:29.075765 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 9 01:40:29.075893 kernel: rtc_cmos 00:04: RTC can wake from S4
May 9 01:40:29.075909 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 9 01:40:29.075993 kernel: rtc_cmos 00:04: registered as rtc0
May 9 01:40:29.076146 kernel: rtc_cmos 00:04: setting system clock to 2025-05-09T01:40:28 UTC (1746754828)
May 9 01:40:29.076239 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 9 01:40:29.076253 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 9 01:40:29.076262 kernel: NET: Registered PF_INET6 protocol family
May 9 01:40:29.076271 kernel: Segment Routing with IPv6
May 9 01:40:29.076280 kernel: In-situ OAM (IOAM) with IPv6
May 9 01:40:29.076289 kernel: NET: Registered PF_PACKET protocol family
May 9 01:40:29.076298 kernel: Key type dns_resolver registered
May 9 01:40:29.076307 kernel: IPI shorthand broadcast: enabled
May 9 01:40:29.076316 kernel: sched_clock: Marking stable (984007711, 171379910)->(1179517345, -24129724)
May 9 01:40:29.076329 kernel: registered taskstats version 1
May 9 01:40:29.076338 kernel: Loading compiled-in X.509 certificates
May 9 01:40:29.076347 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 247aefc84589d8961003173d18a9b4daf28f7c9e'
May 9 01:40:29.076356 kernel: Key type .fscrypt registered
May 9 01:40:29.076365 kernel: Key type fscrypt-provisioning registered
May 9 01:40:29.076374 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 01:40:29.076382 kernel: ima: Allocated hash algorithm: sha1
May 9 01:40:29.076392 kernel: ima: No architecture policies found
May 9 01:40:29.076402 kernel: clk: Disabling unused clocks
May 9 01:40:29.076411 kernel: Freeing unused kernel image (initmem) memory: 43604K
May 9 01:40:29.076420 kernel: Write protecting the kernel read-only data: 40960k
May 9 01:40:29.076429 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 9 01:40:29.076438 kernel: Run /init as init process
May 9 01:40:29.076447 kernel: with arguments:
May 9 01:40:29.076456 kernel: /init
May 9 01:40:29.076465 kernel: with environment:
May 9 01:40:29.076474 kernel: HOME=/
May 9 01:40:29.076482 kernel: TERM=linux
May 9 01:40:29.076493 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 01:40:29.076503 systemd[1]: Successfully made /usr/ read-only.
May 9 01:40:29.076516 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 9 01:40:29.076527 systemd[1]: Detected virtualization kvm.
May 9 01:40:29.076536 systemd[1]: Detected architecture x86-64.
May 9 01:40:29.076546 systemd[1]: Running in initrd.
May 9 01:40:29.076555 systemd[1]: No hostname configured, using default hostname.
May 9 01:40:29.076567 systemd[1]: Hostname set to <linux>.
May 9 01:40:29.076577 systemd[1]: Initializing machine ID from VM UUID.
May 9 01:40:29.076586 systemd[1]: Queued start job for default target initrd.target.
May 9 01:40:29.076596 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 01:40:29.076606 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 01:40:29.076617 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 01:40:29.076635 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 01:40:29.076647 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 01:40:29.076658 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 01:40:29.076669 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 01:40:29.076679 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 01:40:29.076689 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 01:40:29.076701 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 01:40:29.076711 systemd[1]: Reached target paths.target - Path Units.
May 9 01:40:29.076721 systemd[1]: Reached target slices.target - Slice Units.
May 9 01:40:29.076731 systemd[1]: Reached target swap.target - Swaps.
May 9 01:40:29.076740 systemd[1]: Reached target timers.target - Timer Units.
May 9 01:40:29.076750 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 01:40:29.076760 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 01:40:29.076770 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 01:40:29.076780 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 9 01:40:29.076828 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 01:40:29.076839 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 01:40:29.076849 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 01:40:29.076859 systemd[1]: Reached target sockets.target - Socket Units.
May 9 01:40:29.076869 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 01:40:29.076879 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 01:40:29.076889 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 01:40:29.076898 systemd[1]: Starting systemd-fsck-usr.service...
May 9 01:40:29.076911 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 01:40:29.076921 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 01:40:29.076931 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 01:40:29.076941 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 01:40:29.076951 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 01:40:29.076962 systemd[1]: Finished systemd-fsck-usr.service.
May 9 01:40:29.076994 systemd-journald[185]: Collecting audit messages is disabled.
May 9 01:40:29.077020 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 01:40:29.077034 systemd-journald[185]: Journal started
May 9 01:40:29.077057 systemd-journald[185]: Runtime Journal (/run/log/journal/63c7ca011bb1474695a444372ad2221e) is 8M, max 78.2M, 70.2M free.
May 9 01:40:29.070807 systemd-modules-load[186]: Inserted module 'overlay'
May 9 01:40:29.079474 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 01:40:29.082936 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 01:40:29.106811 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 01:40:29.106886 kernel: Bridge firewalling registered
May 9 01:40:29.106811 systemd-modules-load[186]: Inserted module 'br_netfilter'
May 9 01:40:29.132432 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 01:40:29.136234 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 01:40:29.138510 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 01:40:29.142467 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 01:40:29.145886 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 01:40:29.148141 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 01:40:29.150108 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 01:40:29.163010 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 01:40:29.168163 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 01:40:29.170765 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 01:40:29.180048 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 01:40:29.189902 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 01:40:29.205875 dracut-cmdline[220]: dracut-dracut-053
May 9 01:40:29.207839 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6dbb211661f4d09f7718fdc7eab00f1550a8baafb68f4d2efdaedafa102351ae
May 9 01:40:29.224626 systemd-resolved[215]: Positive Trust Anchors:
May 9 01:40:29.225319 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 01:40:29.225362 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 01:40:29.231342 systemd-resolved[215]: Defaulting to hostname 'linux'.
May 9 01:40:29.232543 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 01:40:29.233834 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 01:40:29.281843 kernel: SCSI subsystem initialized
May 9 01:40:29.292841 kernel: Loading iSCSI transport class v2.0-870.
May 9 01:40:29.305226 kernel: iscsi: registered transport (tcp)
May 9 01:40:29.328063 kernel: iscsi: registered transport (qla4xxx)
May 9 01:40:29.328133 kernel: QLogic iSCSI HBA Driver
May 9 01:40:29.388859 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 01:40:29.394572 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 01:40:29.456391 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 01:40:29.456506 kernel: device-mapper: uevent: version 1.0.3
May 9 01:40:29.459897 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 9 01:40:29.521908 kernel: raid6: sse2x4 gen() 5170 MB/s
May 9 01:40:29.540879 kernel: raid6: sse2x2 gen() 5617 MB/s
May 9 01:40:29.559224 kernel: raid6: sse2x1 gen() 9014 MB/s
May 9 01:40:29.559307 kernel: raid6: using algorithm sse2x1 gen() 9014 MB/s
May 9 01:40:29.578200 kernel: raid6: .... xor() 7268 MB/s, rmw enabled
May 9 01:40:29.578353 kernel: raid6: using ssse3x2 recovery algorithm
May 9 01:40:29.600010 kernel: xor: measuring software checksum speed
May 9 01:40:29.600078 kernel: prefetch64-sse : 18499 MB/sec
May 9 01:40:29.603456 kernel: generic_sse : 15283 MB/sec
May 9 01:40:29.603509 kernel: xor: using function: prefetch64-sse (18499 MB/sec)
May 9 01:40:29.781270 kernel: Btrfs loaded, zoned=no, fsverity=no
May 9 01:40:29.800740 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 9 01:40:29.805289 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 01:40:29.839302 systemd-udevd[403]: Using default interface naming scheme 'v255'.
May 9 01:40:29.844315 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 01:40:29.851078 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 9 01:40:29.881197 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
May 9 01:40:29.926938 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 01:40:29.933444 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 01:40:29.992375 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 01:40:30.000921 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 9 01:40:30.047524 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 9 01:40:30.049770 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 01:40:30.052119 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 01:40:30.052602 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 01:40:30.056475 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 9 01:40:30.077444 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 9 01:40:30.092652 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
May 9 01:40:30.097844 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 9 01:40:30.118158 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 9 01:40:30.118215 kernel: GPT:17805311 != 20971519
May 9 01:40:30.118228 kernel: GPT:Alternate GPT header not at the end of the disk.
May 9 01:40:30.119601 kernel: GPT:17805311 != 20971519
May 9 01:40:30.120841 kernel: GPT: Use GNU Parted to correct GPT errors.
May 9 01:40:30.123811 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 01:40:30.134908 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 01:40:30.135804 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 01:40:30.137450 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 01:40:30.138674 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 01:40:30.138771 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 01:40:30.140300 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 9 01:40:30.145814 kernel: libata version 3.00 loaded.
May 9 01:40:30.145978 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 01:40:30.148317 kernel: ata_piix 0000:00:01.1: version 2.13
May 9 01:40:30.149564 kernel: scsi host0: ata_piix
May 9 01:40:30.150019 kernel: scsi host1: ata_piix
May 9 01:40:30.150470 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
May 9 01:40:30.150624 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
May 9 01:40:30.164170 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 9 01:40:30.181856 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (456)
May 9 01:40:30.187864 kernel: BTRFS: device fsid d4537cc2-bda5-4424-8730-1f8e8c76a79a devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (472)
May 9 01:40:30.214551 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 9 01:40:30.246573 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 01:40:30.259755 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 9 01:40:30.286028 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 9 01:40:30.286691 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 9 01:40:30.300374 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 01:40:30.303078 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 9 01:40:30.308745 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 01:40:30.330853 disk-uuid[509]: Primary Header is updated.
May 9 01:40:30.330853 disk-uuid[509]: Secondary Entries is updated.
May 9 01:40:30.330853 disk-uuid[509]: Secondary Header is updated.
May 9 01:40:30.340241 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 01:40:30.339847 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 01:40:30.345843 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 01:40:31.359955 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 9 01:40:31.362648 disk-uuid[514]: The operation has completed successfully.
May 9 01:40:31.445274 systemd[1]: disk-uuid.service: Deactivated successfully.
May 9 01:40:31.445405 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 9 01:40:31.493697 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 9 01:40:31.514413 sh[529]: Success
May 9 01:40:31.535822 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
May 9 01:40:31.602061 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 9 01:40:31.606861 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 9 01:40:31.619413 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 9 01:40:31.632836 kernel: BTRFS info (device dm-0): first mount of filesystem d4537cc2-bda5-4424-8730-1f8e8c76a79a
May 9 01:40:31.632920 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 9 01:40:31.632950 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 9 01:40:31.634208 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 9 01:40:31.635859 kernel: BTRFS info (device dm-0): using free space tree
May 9 01:40:31.653738 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 9 01:40:31.655707 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 9 01:40:31.658059 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 9 01:40:31.663047 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 9 01:40:31.711837 kernel: BTRFS info (device vda6): first mount of filesystem 2d988641-706e-44d5-976c-175654fd684c
May 9 01:40:31.711924 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 01:40:31.711952 kernel: BTRFS info (device vda6): using free space tree
May 9 01:40:31.723853 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 01:40:31.732871 kernel: BTRFS info (device vda6): last unmount of filesystem 2d988641-706e-44d5-976c-175654fd684c
May 9 01:40:31.744757 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 9 01:40:31.748018 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 9 01:40:31.870491 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 01:40:31.873929 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 01:40:31.918163 ignition[627]: Ignition 2.20.0
May 9 01:40:31.918181 ignition[627]: Stage: fetch-offline
May 9 01:40:31.918230 ignition[627]: no configs at "/usr/lib/ignition/base.d"
May 9 01:40:31.918242 ignition[627]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:40:31.920182 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 01:40:31.918362 ignition[627]: parsed url from cmdline: ""
May 9 01:40:31.918366 ignition[627]: no config URL provided
May 9 01:40:31.918372 ignition[627]: reading system config file "/usr/lib/ignition/user.ign"
May 9 01:40:31.918380 ignition[627]: no config at "/usr/lib/ignition/user.ign"
May 9 01:40:31.918388 ignition[627]: failed to fetch config: resource requires networking
May 9 01:40:31.918739 ignition[627]: Ignition finished successfully
May 9 01:40:31.933921 systemd-networkd[709]: lo: Link UP
May 9 01:40:31.933932 systemd-networkd[709]: lo: Gained carrier
May 9 01:40:31.935190 systemd-networkd[709]: Enumeration completed
May 9 01:40:31.935392 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 01:40:31.935948 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 01:40:31.935952 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 01:40:31.936949 systemd[1]: Reached target network.target - Network.
May 9 01:40:31.937129 systemd-networkd[709]: eth0: Link UP
May 9 01:40:31.937132 systemd-networkd[709]: eth0: Gained carrier
May 9 01:40:31.937140 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 01:40:31.939016 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 9 01:40:31.945830 systemd-networkd[709]: eth0: DHCPv4 address 172.24.4.153/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 9 01:40:31.964649 ignition[718]: Ignition 2.20.0
May 9 01:40:31.964667 ignition[718]: Stage: fetch
May 9 01:40:31.964925 ignition[718]: no configs at "/usr/lib/ignition/base.d"
May 9 01:40:31.964938 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:40:31.965037 ignition[718]: parsed url from cmdline: ""
May 9 01:40:31.965041 ignition[718]: no config URL provided
May 9 01:40:31.965047 ignition[718]: reading system config file "/usr/lib/ignition/user.ign"
May 9 01:40:31.965055 ignition[718]: no config at "/usr/lib/ignition/user.ign"
May 9 01:40:31.965158 ignition[718]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 9 01:40:31.965182 ignition[718]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 9 01:40:31.965188 ignition[718]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 9 01:40:32.249300 systemd-resolved[215]: Detected conflict on linux IN A 172.24.4.153
May 9 01:40:32.249330 systemd-resolved[215]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
May 9 01:40:32.447678 ignition[718]: GET result: OK
May 9 01:40:32.447919 ignition[718]: parsing config with SHA512: e18f0ff807b0bbca0bc29931a3a03bcf8098180b2c587d88da606f763c1209a54c275c04db6687786270bf434b7e0e264d63ba868d905ceaed7c8c8c35cafe38
May 9 01:40:32.459282 unknown[718]: fetched base config from "system"
May 9 01:40:32.459310 unknown[718]: fetched base config from "system"
May 9 01:40:32.460319 ignition[718]: fetch: fetch complete
May 9 01:40:32.459326 unknown[718]: fetched user config from "openstack"
May 9 01:40:32.460331 ignition[718]: fetch: fetch passed
May 9 01:40:32.464017 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 9 01:40:32.460421 ignition[718]: Ignition finished successfully
May 9 01:40:32.471136 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 9 01:40:32.513545 ignition[725]: Ignition 2.20.0
May 9 01:40:32.513573 ignition[725]: Stage: kargs
May 9 01:40:32.514024 ignition[725]: no configs at "/usr/lib/ignition/base.d"
May 9 01:40:32.514053 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:40:32.518599 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 9 01:40:32.516411 ignition[725]: kargs: kargs passed
May 9 01:40:32.516509 ignition[725]: Ignition finished successfully
May 9 01:40:32.526024 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 9 01:40:32.568118 ignition[732]: Ignition 2.20.0
May 9 01:40:32.570599 ignition[732]: Stage: disks
May 9 01:40:32.571359 ignition[732]: no configs at "/usr/lib/ignition/base.d"
May 9 01:40:32.571388 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:40:32.573756 ignition[732]: disks: disks passed
May 9 01:40:32.573890 ignition[732]: Ignition finished successfully
May 9 01:40:32.576519 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 9 01:40:32.579092 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 9 01:40:32.581076 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 9 01:40:32.583675 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 01:40:32.586612 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 01:40:32.589556 systemd[1]: Reached target basic.target - Basic System.
May 9 01:40:32.595312 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 9 01:40:32.641331 systemd-fsck[740]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 9 01:40:32.654960 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 9 01:40:32.659515 systemd[1]: Mounting sysroot.mount - /sysroot...
May 9 01:40:32.832845 kernel: EXT4-fs (vda9): mounted filesystem 0829e1d9-eacd-4a94-9591-6f579c115eeb r/w with ordered data mode. Quota mode: none.
May 9 01:40:32.833771 systemd[1]: Mounted sysroot.mount - /sysroot.
May 9 01:40:32.834920 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 9 01:40:32.839864 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 01:40:32.841494 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 9 01:40:32.842359 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 9 01:40:32.849309 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
May 9 01:40:32.855915 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 9 01:40:32.855958 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 01:40:32.857634 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 9 01:40:32.864823 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (748)
May 9 01:40:32.874818 kernel: BTRFS info (device vda6): first mount of filesystem 2d988641-706e-44d5-976c-175654fd684c
May 9 01:40:32.874845 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 01:40:32.874857 kernel: BTRFS info (device vda6): using free space tree
May 9 01:40:32.875569 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 9 01:40:32.886811 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 01:40:32.892054 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 01:40:33.004659 initrd-setup-root[776]: cut: /sysroot/etc/passwd: No such file or directory
May 9 01:40:33.010619 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory
May 9 01:40:33.022256 initrd-setup-root[791]: cut: /sysroot/etc/shadow: No such file or directory
May 9 01:40:33.028520 initrd-setup-root[798]: cut: /sysroot/etc/gshadow: No such file or directory
May 9 01:40:33.122395 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 9 01:40:33.124387 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 9 01:40:33.126909 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 9 01:40:33.138652 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 9 01:40:33.141527 kernel: BTRFS info (device vda6): last unmount of filesystem 2d988641-706e-44d5-976c-175654fd684c
May 9 01:40:33.147033 systemd-networkd[709]: eth0: Gained IPv6LL
May 9 01:40:33.160616 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 9 01:40:33.167427 ignition[866]: INFO : Ignition 2.20.0
May 9 01:40:33.169643 ignition[866]: INFO : Stage: mount
May 9 01:40:33.169643 ignition[866]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 01:40:33.169643 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:40:33.169643 ignition[866]: INFO : mount: mount passed
May 9 01:40:33.169643 ignition[866]: INFO : Ignition finished successfully
May 9 01:40:33.172173 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 9 01:40:40.081467 coreos-metadata[750]: May 09 01:40:40.081 WARN failed to locate config-drive, using the metadata service API instead
May 9 01:40:40.124509 coreos-metadata[750]: May 09 01:40:40.124 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 9 01:40:40.141193 coreos-metadata[750]: May 09 01:40:40.141 INFO Fetch successful
May 9 01:40:40.143318 coreos-metadata[750]: May 09 01:40:40.143 INFO wrote hostname ci-4284-0-0-n-bbb05de7dc.novalocal to /sysroot/etc/hostname
May 9 01:40:40.146282 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 9 01:40:40.146499 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
May 9 01:40:40.155001 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 9 01:40:40.186338 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 9 01:40:40.219891 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (882)
May 9 01:40:40.227244 kernel: BTRFS info (device vda6): first mount of filesystem 2d988641-706e-44d5-976c-175654fd684c
May 9 01:40:40.227309 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 9 01:40:40.231485 kernel: BTRFS info (device vda6): using free space tree
May 9 01:40:40.242894 kernel: BTRFS info (device vda6): auto enabling async discard
May 9 01:40:40.248722 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 9 01:40:40.294550 ignition[900]: INFO : Ignition 2.20.0
May 9 01:40:40.294550 ignition[900]: INFO : Stage: files
May 9 01:40:40.299093 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 01:40:40.299093 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:40:40.299093 ignition[900]: DEBUG : files: compiled without relabeling support, skipping
May 9 01:40:40.304494 ignition[900]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 9 01:40:40.304494 ignition[900]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 9 01:40:40.314477 ignition[900]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 9 01:40:40.316526 ignition[900]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 9 01:40:40.316526 ignition[900]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 01:40:40.315395 unknown[900]: wrote ssh authorized keys file for user: core
May 9 01:40:40.321728 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 9 01:40:40.321728 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 9 01:40:40.392284 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 9 01:40:40.681133 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 9 01:40:40.681133 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 9 01:40:40.685969 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 9 01:40:40.685969 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 9 01:40:40.685969 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 9 01:40:40.685969 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 01:40:40.685969 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 01:40:40.685969 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 01:40:40.685969 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 9 01:40:40.685969 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 9 01:40:40.685969 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 9 01:40:40.685969 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 01:40:40.685969 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 01:40:40.685969 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 01:40:40.685969 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 9 01:40:41.946538 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 9 01:40:43.556422 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 9 01:40:43.557929 ignition[900]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 9 01:40:43.559848 ignition[900]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 01:40:43.559848 ignition[900]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 01:40:43.559848 ignition[900]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 9 01:40:43.559848 ignition[900]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
May 9 01:40:43.570784 ignition[900]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
May 9 01:40:43.570784 ignition[900]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 9 01:40:43.570784 ignition[900]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 9 01:40:43.570784 ignition[900]: INFO : files: files passed
May 9 01:40:43.570784 ignition[900]: INFO : Ignition finished successfully
May 9 01:40:43.561444 systemd[1]: Finished ignition-files.service - Ignition (files).
May 9 01:40:43.566935 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 9 01:40:43.571319 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 9 01:40:43.592672 systemd[1]: ignition-quench.service: Deactivated successfully.
May 9 01:40:43.594834 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 01:40:43.594834 initrd-setup-root-after-ignition[928]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 9 01:40:43.592772 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 01:40:43.597735 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 9 01:40:43.598751 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 01:40:43.601755 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 9 01:40:43.604892 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 9 01:40:43.672551 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 9 01:40:43.674100 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 9 01:40:43.677288 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 9 01:40:43.677899 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 9 01:40:43.680071 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 9 01:40:43.681890 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 9 01:40:43.708728 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 01:40:43.712901 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 9 01:40:43.739337 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 9 01:40:43.740761 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 01:40:43.741473 systemd[1]: Stopped target timers.target - Timer Units.
May 9 01:40:43.743693 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 9 01:40:43.743841 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 9 01:40:43.746307 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 9 01:40:43.747398 systemd[1]: Stopped target basic.target - Basic System.
May 9 01:40:43.749557 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 9 01:40:43.751334 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 9 01:40:43.753120 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 9 01:40:43.755284 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 9 01:40:43.757365 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 9 01:40:43.759636 systemd[1]: Stopped target sysinit.target - System Initialization.
May 9 01:40:43.761732 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 9 01:40:43.763946 systemd[1]: Stopped target swap.target - Swaps.
May 9 01:40:43.765922 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 9 01:40:43.766061 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 9 01:40:43.768410 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 9 01:40:43.769537 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 01:40:43.771291 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 9 01:40:43.772178 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 01:40:43.773450 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 9 01:40:43.773568 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 9 01:40:43.776774 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 9 01:40:43.776922 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 9 01:40:43.777937 systemd[1]: ignition-files.service: Deactivated successfully.
May 9 01:40:43.778048 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 9 01:40:43.782994 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 9 01:40:43.788009 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 9 01:40:43.788876 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 9 01:40:43.789071 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 01:40:43.791014 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 9 01:40:43.791177 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 9 01:40:43.801771 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 9 01:40:43.803064 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 9 01:40:43.815376 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 9 01:40:43.816639 ignition[953]: INFO : Ignition 2.20.0
May 9 01:40:43.817483 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 9 01:40:43.817596 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 9 01:40:43.820374 ignition[953]: INFO : Stage: umount
May 9 01:40:43.820374 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
May 9 01:40:43.820374 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 9 01:40:43.823774 ignition[953]: INFO : umount: umount passed
May 9 01:40:43.823774 ignition[953]: INFO : Ignition finished successfully
May 9 01:40:43.823208 systemd[1]: ignition-mount.service: Deactivated successfully.
May 9 01:40:43.823305 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 9 01:40:43.824724 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 01:40:43.825331 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 01:40:43.825966 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 01:40:43.826035 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 01:40:43.826953 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 9 01:40:43.826997 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 9 01:40:43.827955 systemd[1]: Stopped target network.target - Network.
May 9 01:40:43.828870 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 01:40:43.828918 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 01:40:43.829934 systemd[1]: Stopped target paths.target - Path Units.
May 9 01:40:43.830868 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 01:40:43.836847 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 01:40:43.837410 systemd[1]: Stopped target slices.target - Slice Units.
May 9 01:40:43.838604 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 01:40:43.839572 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 01:40:43.839607 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 01:40:43.840552 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 01:40:43.840584 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 01:40:43.841527 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 01:40:43.841570 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 01:40:43.842506 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 01:40:43.842546 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 01:40:43.843486 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 01:40:43.843529 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 01:40:43.844582 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 01:40:43.845744 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 01:40:43.848234 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 01:40:43.848337 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 01:40:43.851683 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 9 01:40:43.853022 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 01:40:43.853099 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 01:40:43.855531 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 9 01:40:43.855749 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 01:40:43.855909 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 01:40:43.861766 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 9 01:40:43.862272 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 01:40:43.862443 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 01:40:43.864869 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 01:40:43.865389 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 01:40:43.865439 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 01:40:43.867139 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 01:40:43.867185 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 01:40:43.868938 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 01:40:43.868982 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 01:40:43.870493 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 01:40:43.873027 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 9 01:40:43.881085 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 01:40:43.881218 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 01:40:43.882341 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 01:40:43.882398 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 01:40:43.883360 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 01:40:43.883390 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 01:40:43.884529 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 01:40:43.884574 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 01:40:43.886171 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 01:40:43.886213 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 01:40:43.887570 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 01:40:43.887614 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 01:40:43.890921 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 01:40:43.891716 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 01:40:43.891772 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 01:40:43.895940 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 9 01:40:43.895989 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 01:40:43.897031 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 01:40:43.897073 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 01:40:43.898184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 01:40:43.898227 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 01:40:43.902309 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 01:40:43.902395 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 01:40:43.906778 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 01:40:43.906915 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 01:40:43.908409 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 01:40:43.910944 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 01:40:43.932072 systemd[1]: Switching root.
May 9 01:40:43.969975 systemd-journald[185]: Journal stopped
May 9 01:40:45.646680 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
May 9 01:40:45.646745 kernel: SELinux: policy capability network_peer_controls=1
May 9 01:40:45.646767 kernel: SELinux: policy capability open_perms=1
May 9 01:40:45.646779 kernel: SELinux: policy capability extended_socket_class=1
May 9 01:40:45.646844 kernel: SELinux: policy capability always_check_network=0
May 9 01:40:45.646863 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 01:40:45.646875 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 01:40:45.646886 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 01:40:45.646897 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 01:40:45.646908 kernel: audit: type=1403 audit(1746754844.558:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 01:40:45.646922 systemd[1]: Successfully loaded SELinux policy in 72.739ms.
May 9 01:40:45.646950 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.775ms.
May 9 01:40:45.646964 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 9 01:40:45.646978 systemd[1]: Detected virtualization kvm.
May 9 01:40:45.646995 systemd[1]: Detected architecture x86-64.
May 9 01:40:45.647007 systemd[1]: Detected first boot.
May 9 01:40:45.647020 systemd[1]: Hostname set to <ci-4284-0-0-n-bbb05de7dc.novalocal>.
May 9 01:40:45.647032 systemd[1]: Initializing machine ID from VM UUID.
May 9 01:40:45.647045 zram_generator::config[999]: No configuration found.
May 9 01:40:45.647059 kernel: Guest personality initialized and is inactive
May 9 01:40:45.647071 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 9 01:40:45.647082 kernel: Initialized host personality
May 9 01:40:45.647093 kernel: NET: Registered PF_VSOCK protocol family
May 9 01:40:45.647105 systemd[1]: Populated /etc with preset unit settings.
May 9 01:40:45.647118 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 9 01:40:45.647130 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 01:40:45.647143 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 01:40:45.647155 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 01:40:45.647169 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 01:40:45.647181 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 01:40:45.647193 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 01:40:45.647205 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 01:40:45.647217 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 01:40:45.647230 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 01:40:45.647243 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 01:40:45.647255 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 01:40:45.647269 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 01:40:45.647284 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 01:40:45.647297 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 01:40:45.647310 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 01:40:45.647323 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 01:40:45.647336 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 01:40:45.647350 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 9 01:40:45.647362 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 01:40:45.647374 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 01:40:45.647386 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 01:40:45.647398 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 01:40:45.647411 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 01:40:45.647423 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 01:40:45.647435 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 01:40:45.647447 systemd[1]: Reached target slices.target - Slice Units.
May 9 01:40:45.647458 systemd[1]: Reached target swap.target - Swaps.
May 9 01:40:45.647473 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 01:40:45.647485 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 01:40:45.647497 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 9 01:40:45.647509 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 01:40:45.647521 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 01:40:45.647533 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 01:40:45.647545 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 01:40:45.647558 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 01:40:45.647570 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 01:40:45.647584 systemd[1]: Mounting media.mount - External Media Directory...
May 9 01:40:45.647598 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 01:40:45.647610 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 01:40:45.647622 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 01:40:45.647635 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 01:40:45.647648 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 01:40:45.647660 systemd[1]: Reached target machines.target - Containers.
May 9 01:40:45.647673 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 01:40:45.647687 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 01:40:45.647699 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 01:40:45.647711 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 01:40:45.647724 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 01:40:45.647736 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 01:40:45.647748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 01:40:45.647760 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 01:40:45.647773 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 01:40:45.649020 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 01:40:45.649038 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 01:40:45.649051 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 01:40:45.649063 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 01:40:45.649075 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 01:40:45.649088 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 9 01:40:45.649101 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 01:40:45.649113 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 01:40:45.649125 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 01:40:45.649141 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 01:40:45.649153 kernel: loop: module loaded
May 9 01:40:45.649165 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 9 01:40:45.649177 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 01:40:45.649190 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 01:40:45.649202 systemd[1]: Stopped verity-setup.service.
May 9 01:40:45.649216 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 01:40:45.649228 kernel: fuse: init (API version 7.39)
May 9 01:40:45.649240 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 01:40:45.649252 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 01:40:45.649267 systemd[1]: Mounted media.mount - External Media Directory.
May 9 01:40:45.649280 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 01:40:45.649292 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 01:40:45.649304 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 01:40:45.649316 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 01:40:45.649328 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 01:40:45.649340 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 01:40:45.649352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 01:40:45.649364 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 01:40:45.649378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 01:40:45.649390 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 01:40:45.649420 systemd-journald[1086]: Collecting audit messages is disabled.
May 9 01:40:45.649446 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 01:40:45.649459 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 01:40:45.649472 systemd-journald[1086]: Journal started
May 9 01:40:45.649500 systemd-journald[1086]: Runtime Journal (/run/log/journal/63c7ca011bb1474695a444372ad2221e) is 8M, max 78.2M, 70.2M free.
May 9 01:40:45.297760 systemd[1]: Queued start job for default target multi-user.target.
May 9 01:40:45.306066 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 9 01:40:45.306567 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 01:40:45.654531 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 01:40:45.654726 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 01:40:45.654960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 01:40:45.656991 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 01:40:45.657721 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 01:40:45.659081 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 01:40:45.660485 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 9 01:40:45.677049 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 01:40:45.679747 kernel: ACPI: bus type drm_connector registered
May 9 01:40:45.689022 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 01:40:45.694102 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 01:40:45.696868 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 01:40:45.696910 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 01:40:45.699251 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 9 01:40:45.702417 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 01:40:45.707328 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 01:40:45.714033 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 01:40:45.718104 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 01:40:45.720260 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 01:40:45.722807 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 01:40:45.727676 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 01:40:45.731880 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 01:40:45.736048 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 01:40:45.739982 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 01:40:45.748968 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 01:40:45.754269 systemd-journald[1086]: Time spent on flushing to /var/log/journal/63c7ca011bb1474695a444372ad2221e is 41.281ms for 954 entries.
May 9 01:40:45.754269 systemd-journald[1086]: System Journal (/var/log/journal/63c7ca011bb1474695a444372ad2221e) is 8M, max 584.8M, 576.8M free.
May 9 01:40:45.821881 systemd-journald[1086]: Received client request to flush runtime journal.
May 9 01:40:45.821924 kernel: loop0: detected capacity change from 0 to 151640
May 9 01:40:45.757077 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 01:40:45.758341 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 01:40:45.758699 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 01:40:45.760019 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 01:40:45.760666 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 01:40:45.761726 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 01:40:45.778066 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 01:40:45.780975 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 01:40:45.782412 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 01:40:45.796662 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 9 01:40:45.799120 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 01:40:45.817960 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 01:40:45.827935 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 01:40:45.852137 udevadm[1148]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 9 01:40:45.866014 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 01:40:45.886495 systemd-tmpfiles[1137]: ACLs are not supported, ignoring.
May 9 01:40:45.896097 kernel: loop1: detected capacity change from 0 to 8
May 9 01:40:45.886514 systemd-tmpfiles[1137]: ACLs are not supported, ignoring.
May 9 01:40:45.896419 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 01:40:45.900902 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 01:40:45.901912 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 9 01:40:45.929836 kernel: loop2: detected capacity change from 0 to 109808
May 9 01:40:45.986588 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 01:40:45.990050 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 01:40:46.007416 kernel: loop3: detected capacity change from 0 to 210664
May 9 01:40:46.019811 systemd-tmpfiles[1162]: ACLs are not supported, ignoring.
May 9 01:40:46.020291 systemd-tmpfiles[1162]: ACLs are not supported, ignoring.
May 9 01:40:46.028598 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 01:40:46.062820 kernel: loop4: detected capacity change from 0 to 151640
May 9 01:40:46.150848 kernel: loop5: detected capacity change from 0 to 8
May 9 01:40:46.157818 kernel: loop6: detected capacity change from 0 to 109808
May 9 01:40:46.203845 kernel: loop7: detected capacity change from 0 to 210664
May 9 01:40:46.276823 (sd-merge)[1166]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 9 01:40:46.277942 (sd-merge)[1166]: Merged extensions into '/usr'.
May 9 01:40:46.284163 systemd[1]: Reload requested from client PID 1136 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 01:40:46.284180 systemd[1]: Reloading...
May 9 01:40:46.393827 zram_generator::config[1191]: No configuration found.
May 9 01:40:46.667893 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 01:40:46.716322 ldconfig[1131]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 01:40:46.754446 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 01:40:46.754594 systemd[1]: Reloading finished in 469 ms.
May 9 01:40:46.780301 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 01:40:46.781217 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 01:40:46.782004 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 01:40:46.794195 systemd[1]: Starting ensure-sysext.service...
May 9 01:40:46.795851 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 01:40:46.799146 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 01:40:46.831661 systemd[1]: Reload requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)...
May 9 01:40:46.831682 systemd[1]: Reloading...
May 9 01:40:46.854606 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 01:40:46.856759 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 01:40:46.859457 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 01:40:46.860469 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
May 9 01:40:46.860853 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
May 9 01:40:46.874520 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
May 9 01:40:46.874901 systemd-tmpfiles[1253]: Skipping /boot
May 9 01:40:46.875469 systemd-udevd[1254]: Using default interface naming scheme 'v255'.
May 9 01:40:46.900462 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
May 9 01:40:46.900846 systemd-tmpfiles[1253]: Skipping /boot
May 9 01:40:46.959829 zram_generator::config[1290]: No configuration found.
May 9 01:40:47.112012 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1299)
May 9 01:40:47.153864 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 9 01:40:47.162824 kernel: ACPI: button: Power Button [PWRF]
May 9 01:40:47.189851 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 9 01:40:47.200992 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 01:40:47.276815 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 9 01:40:47.307595 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 9 01:40:47.308011 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 9 01:40:47.308747 systemd[1]: Reloading finished in 476 ms.
May 9 01:40:47.318359 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 01:40:47.319260 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 01:40:47.331850 kernel: mousedev: PS/2 mouse device common for all mice
May 9 01:40:47.364830 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 9 01:40:47.369845 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 9 01:40:47.376586 kernel: Console: switching to colour dummy device 80x25
May 9 01:40:47.378407 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 9 01:40:47.378446 kernel: [drm] features: -context_init
May 9 01:40:47.380306 kernel: [drm] number of scanouts: 1
May 9 01:40:47.380343 kernel: [drm] number of cap sets: 0
May 9 01:40:47.386825 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 9 01:40:47.389823 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 9 01:40:47.396910 kernel: Console: switching to colour frame buffer device 160x50
May 9 01:40:47.405840 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 9 01:40:47.415065 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 01:40:47.416907 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 9 01:40:47.437893 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 01:40:47.439042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 01:40:47.442079 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 01:40:47.445258 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 01:40:47.453896 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 01:40:47.459379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 01:40:47.459735 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 01:40:47.466094 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 01:40:47.466192 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 9 01:40:47.473009 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 01:40:47.477250 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 01:40:47.484254 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 01:40:47.490047 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 01:40:47.498508 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 01:40:47.499509 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 9 01:40:47.512288 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 01:40:47.512805 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 01:40:47.516644 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 01:40:47.517154 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 01:40:47.519757 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 01:40:47.520555 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 01:40:47.521816 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 01:40:47.522264 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 01:40:47.526400 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 01:40:47.539151 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 01:40:47.540125 systemd[1]: Finished ensure-sysext.service.
May 9 01:40:47.559365 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 01:40:47.564116 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 01:40:47.565135 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 01:40:47.565205 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 01:40:47.568018 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 9 01:40:47.574094 augenrules[1414]: No rules
May 9 01:40:47.582404 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 01:40:47.589736 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 01:40:47.592657 systemd[1]: audit-rules.service: Deactivated successfully.
May 9 01:40:47.596998 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 9 01:40:47.598197 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 01:40:47.620469 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 01:40:47.632710 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 01:40:47.642961 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 01:40:47.648458 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 01:40:47.656533 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 01:40:47.663736 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 01:40:47.666578 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 01:40:47.671378 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 01:40:47.697422 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 01:40:47.698570 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 01:40:47.729207 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 01:40:47.790265 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 9 01:40:47.793424 systemd[1]: Reached target time-set.target - System Time Set.
May 9 01:40:47.805025 systemd-resolved[1393]: Positive Trust Anchors:
May 9 01:40:47.805345 systemd-resolved[1393]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 01:40:47.805394 systemd-resolved[1393]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 01:40:47.812381 systemd-resolved[1393]: Using system hostname 'ci-4284-0-0-n-bbb05de7dc.novalocal'.
May 9 01:40:47.815291 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 01:40:47.815984 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 01:40:47.816432 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 01:40:47.816830 systemd-networkd[1388]: lo: Link UP
May 9 01:40:47.817073 systemd-networkd[1388]: lo: Gained carrier
May 9 01:40:47.818295 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 01:40:47.818674 systemd-networkd[1388]: Enumeration completed
May 9 01:40:47.818830 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 01:40:47.819086 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 01:40:47.819091 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 01:40:47.819704 systemd-networkd[1388]: eth0: Link UP
May 9 01:40:47.819708 systemd-networkd[1388]: eth0: Gained carrier
May 9 01:40:47.819723 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 01:40:47.821153 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 01:40:47.821649 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 01:40:47.822082 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 01:40:47.822518 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 01:40:47.822555 systemd[1]: Reached target paths.target - Path Units.
May 9 01:40:47.824643 systemd[1]: Reached target timers.target - Timer Units.
May 9 01:40:47.830841 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 01:40:47.832777 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 01:40:47.833148 systemd-networkd[1388]: eth0: DHCPv4 address 172.24.4.153/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 9 01:40:47.840383 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection.
May 9 01:40:47.840824 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 9 01:40:47.841675 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 9 01:40:47.842208 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 9 01:40:47.854465 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 01:40:47.856237 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 9 01:40:47.858490 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 01:40:47.862249 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 01:40:47.864557 systemd[1]: Reached target network.target - Network.
May 9 01:40:47.866559 systemd[1]: Reached target sockets.target - Socket Units.
May 9 01:40:47.868671 systemd[1]: Reached target basic.target - Basic System.
May 9 01:40:47.870824 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 01:40:47.870935 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 01:40:47.872253 systemd[1]: Starting containerd.service - containerd container runtime...
May 9 01:40:47.884367 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 9 01:40:47.888632 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 9 01:40:47.897309 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 9 01:40:47.904093 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 9 01:40:47.904832 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 9 01:40:47.912015 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 9 01:40:47.916494 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 9 01:40:47.925056 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 9 01:40:47.931887 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 9 01:40:47.944997 jq[1447]: false
May 9 01:40:47.944712 systemd[1]: Starting systemd-logind.service - User Login Management...
May 9 01:40:47.952290 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 9 01:40:47.952827 extend-filesystems[1450]: Found loop4
May 9 01:40:47.952827 extend-filesystems[1450]: Found loop5
May 9 01:40:47.952827 extend-filesystems[1450]: Found loop6
May 9 01:40:47.952827 extend-filesystems[1450]: Found loop7
May 9 01:40:47.952827 extend-filesystems[1450]: Found vda
May 9 01:40:47.952827 extend-filesystems[1450]: Found vda1
May 9 01:40:47.965891 extend-filesystems[1450]: Found vda2
May 9 01:40:47.965891 extend-filesystems[1450]: Found vda3
May 9 01:40:47.965891 extend-filesystems[1450]: Found usr
May 9 01:40:47.965891 extend-filesystems[1450]: Found vda4
May 9 01:40:47.965891 extend-filesystems[1450]: Found vda6
May 9 01:40:47.965891 extend-filesystems[1450]: Found vda7
May 9 01:40:47.965891 extend-filesystems[1450]: Found vda9
May 9 01:40:47.965891 extend-filesystems[1450]: Checking size of /dev/vda9
May 9 01:40:47.958649 dbus-daemon[1446]: [system] SELinux support is enabled
May 9 01:40:47.970976 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 01:40:47.976875 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 9 01:40:47.987553 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 9 01:40:47.994160 systemd[1]: Starting update-engine.service - Update Engine...
May 9 01:40:48.004130 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 9 01:40:48.005476 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 9 01:40:48.006051 extend-filesystems[1450]: Resized partition /dev/vda9
May 9 01:40:48.019696 extend-filesystems[1474]: resize2fs 1.47.2 (1-Jan-2025)
May 9 01:40:48.025195 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 9 01:40:48.025457 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 9 01:40:48.025730 systemd[1]: motdgen.service: Deactivated successfully.
May 9 01:40:48.025973 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 9 01:40:48.037939 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 9 01:40:48.038448 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 9 01:40:48.049830 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 9 01:40:48.053458 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 9 01:40:48.053504 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 9 01:40:48.060344 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 9 01:40:48.060378 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 9 01:40:48.088812 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 9 01:40:48.143702 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1304)
May 9 01:40:48.143872 update_engine[1469]: I20250509 01:40:48.111987 1469 main.cc:92] Flatcar Update Engine starting
May 9 01:40:48.143872 update_engine[1469]: I20250509 01:40:48.125479 1469 update_check_scheduler.cc:74] Next update check in 8m14s
May 9 01:40:48.111925 systemd-logind[1458]: New seat seat0.
May 9 01:40:48.144420 tar[1476]: linux-amd64/helm
May 9 01:40:48.121764 systemd[1]: Started update-engine.service - Update Engine.
May 9 01:40:48.144677 jq[1472]: true
May 9 01:40:48.127551 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 9 01:40:48.133156 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 9 01:40:48.139244 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 9 01:40:48.145358 jq[1484]: true
May 9 01:40:48.148814 systemd-logind[1458]: Watching system buttons on /dev/input/event1 (Power Button)
May 9 01:40:48.148839 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 9 01:40:48.158956 systemd[1]: Started systemd-logind.service - User Login Management.
May 9 01:40:48.165405 extend-filesystems[1474]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 9 01:40:48.165405 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 1
May 9 01:40:48.165405 extend-filesystems[1474]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 9 01:40:48.181958 extend-filesystems[1450]: Resized filesystem in /dev/vda9
May 9 01:40:48.171318 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 9 01:40:48.171536 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 9 01:40:48.397355 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 9 01:40:48.554639 bash[1507]: Updated "/home/core/.ssh/authorized_keys"
May 9 01:40:48.555838 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 9 01:40:48.569255 systemd[1]: Starting sshkeys.service...
May 9 01:40:48.576428 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 9 01:40:48.598522 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 9 01:40:48.606035 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 9 01:40:48.616402 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 9 01:40:48.629736 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 9 01:40:48.660260 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 9 01:40:48.667443 systemd[1]: Started sshd@0-172.24.4.153:22-172.24.4.1:54750.service - OpenSSH per-connection server daemon (172.24.4.1:54750).
May 9 01:40:48.673465 systemd[1]: issuegen.service: Deactivated successfully.
May 9 01:40:48.673674 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 9 01:40:48.681929 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 9 01:40:48.731891 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 9 01:40:48.738515 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 9 01:40:48.744202 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 9 01:40:48.745022 systemd[1]: Reached target getty.target - Login Prompts.
May 9 01:40:48.863968 containerd[1483]: time="2025-05-09T01:40:48Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 9 01:40:48.865470 containerd[1483]: time="2025-05-09T01:40:48.865447041Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 9 01:40:48.889818 containerd[1483]: time="2025-05-09T01:40:48.889762244Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.405µs"
May 9 01:40:48.889973 containerd[1483]: time="2025-05-09T01:40:48.889953423Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 9 01:40:48.890039 containerd[1483]: time="2025-05-09T01:40:48.890024466Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 9 01:40:48.890269 containerd[1483]: time="2025-05-09T01:40:48.890250199Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 9 01:40:48.890336 containerd[1483]: time="2025-05-09T01:40:48.890321483Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 9 01:40:48.890407 containerd[1483]: time="2025-05-09T01:40:48.890393368Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 9 01:40:48.890532 containerd[1483]: time="2025-05-09T01:40:48.890512461Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 9 01:40:48.890589 containerd[1483]: time="2025-05-09T01:40:48.890575920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 9 01:40:48.890895 containerd[1483]: time="2025-05-09T01:40:48.890873689Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 9 01:40:48.890959 containerd[1483]: time="2025-05-09T01:40:48.890946184Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 9 01:40:48.891015 containerd[1483]: time="2025-05-09T01:40:48.891001007Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 9 01:40:48.891079 containerd[1483]: time="2025-05-09T01:40:48.891064807Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 9 01:40:48.891217 containerd[1483]: time="2025-05-09T01:40:48.891198898Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 9 01:40:48.891475 containerd[1483]: time="2025-05-09T01:40:48.891457113Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 9 01:40:48.891557 containerd[1483]: time="2025-05-09T01:40:48.891540189Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 9 01:40:48.891617 containerd[1483]: time="2025-05-09T01:40:48.891603207Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 9 01:40:48.891694 containerd[1483]: time="2025-05-09T01:40:48.891680492Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 9 01:40:48.892111 containerd[1483]: time="2025-05-09T01:40:48.892082085Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 9 01:40:48.892235 containerd[1483]: time="2025-05-09T01:40:48.892218230Z" level=info msg="metadata content store policy set" policy=shared
May 9 01:40:48.947348 tar[1476]: linux-amd64/LICENSE
May 9 01:40:48.947550 tar[1476]: linux-amd64/README.md
May 9 01:40:48.985339 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 9 01:40:49.003516 containerd[1483]: time="2025-05-09T01:40:49.003413978Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 9 01:40:49.003711 containerd[1483]: time="2025-05-09T01:40:49.003528913Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 9 01:40:49.003711 containerd[1483]: time="2025-05-09T01:40:49.003560252Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 9 01:40:49.003711 containerd[1483]: time="2025-05-09T01:40:49.003634461Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 9 01:40:49.003711 containerd[1483]: time="2025-05-09T01:40:49.003660791Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 9 01:40:49.003711 containerd[1483]: time="2025-05-09T01:40:49.003682231Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 9 01:40:49.003711 containerd[1483]: time="2025-05-09T01:40:49.003706116Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 9 01:40:49.004019 containerd[1483]: time="2025-05-09T01:40:49.003735110Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 9 01:40:49.004019 containerd[1483]: time="2025-05-09T01:40:49.003757362Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 9 01:40:49.004019 containerd[1483]: time="2025-05-09T01:40:49.003778992Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 9 01:40:49.004019 containerd[1483]: time="2025-05-09T01:40:49.003836981Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 9 01:40:49.004019 containerd[1483]: time="2025-05-09T01:40:49.003861958Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 9 01:40:49.004211 containerd[1483]: time="2025-05-09T01:40:49.004133768Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 9 01:40:49.004253 containerd[1483]: time="2025-05-09T01:40:49.004202346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 9 01:40:49.004292 containerd[1483]: time="2025-05-09T01:40:49.004245187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 9 01:40:49.004292 containerd[1483]: time="2025-05-09T01:40:49.004275654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 9 01:40:49.004451 containerd[1483]: time="2025-05-09T01:40:49.004303005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 9 01:40:49.004451 containerd[1483]: time="2025-05-09T01:40:49.004330617Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 9 01:40:49.004451 containerd[1483]: time="2025-05-09T01:40:49.004359110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 9 01:40:49.004451 containerd[1483]: time="2025-05-09T01:40:49.004393234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 9 01:40:49.004451 containerd[1483]: time="2025-05-09T01:40:49.004422900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 9 01:40:49.004645 containerd[1483]: time="2025-05-09T01:40:49.004450893Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 9 01:40:49.004645 containerd[1483]: time="2025-05-09T01:40:49.004481811Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 9 01:40:49.004645 containerd[1483]: time="2025-05-09T01:40:49.004612846Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 9 01:40:49.004757 containerd[1483]: time="2025-05-09T01:40:49.004647391Z" level=info msg="Start snapshots syncer"
May 9 01:40:49.004757 containerd[1483]: time="2025-05-09T01:40:49.004732481Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 9 01:40:49.005700 containerd[1483]: time="2025-05-09T01:40:49.005483870Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 9 01:40:49.005700 containerd[1483]: time="2025-05-09T01:40:49.005649821Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 9 01:40:49.006141 containerd[1483]: time="2025-05-09T01:40:49.005832043Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 9 01:40:49.006141 containerd[1483]: time="2025-05-09T01:40:49.006049901Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 9 01:40:49.006260 containerd[1483]: time="2025-05-09T01:40:49.006108572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 9 01:40:49.006260 containerd[1483]: time="2025-05-09T01:40:49.006173914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 9 01:40:49.006260 containerd[1483]: time="2025-05-09T01:40:49.006203530Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 9 01:40:49.006389 containerd[1483]: time="2025-05-09T01:40:49.006272088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 9 01:40:49.006389 containerd[1483]: time="2025-05-09T01:40:49.006303217Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 9 01:40:49.006389 containerd[1483]: time="2025-05-09T01:40:49.006331650Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 9 01:40:49.006501 containerd[1483]: time="2025-05-09T01:40:49.006384459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 9 01:40:49.006501 containerd[1483]: 
time="2025-05-09T01:40:49.006417441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 9 01:40:49.006501 containerd[1483]: time="2025-05-09T01:40:49.006442478Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 9 01:40:49.006877 containerd[1483]: time="2025-05-09T01:40:49.006535322Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 9 01:40:49.006877 containerd[1483]: time="2025-05-09T01:40:49.006581138Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 9 01:40:49.006877 containerd[1483]: time="2025-05-09T01:40:49.006606315Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 9 01:40:49.006877 containerd[1483]: time="2025-05-09T01:40:49.006633286Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 9 01:40:49.006877 containerd[1483]: time="2025-05-09T01:40:49.006656740Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 9 01:40:49.006877 containerd[1483]: time="2025-05-09T01:40:49.006682879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 9 01:40:49.006877 containerd[1483]: time="2025-05-09T01:40:49.006713606Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 9 01:40:49.006877 containerd[1483]: time="2025-05-09T01:40:49.006751457Z" level=info msg="runtime interface created" May 9 01:40:49.006877 containerd[1483]: time="2025-05-09T01:40:49.006766015Z" level=info msg="created NRI interface" May 9 01:40:49.006877 containerd[1483]: time="2025-05-09T01:40:49.006831768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 9 01:40:49.006877 containerd[1483]: time="2025-05-09T01:40:49.006868367Z" level=info msg="Connect containerd service" May 9 01:40:49.007837 containerd[1483]: time="2025-05-09T01:40:49.006930102Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 01:40:49.008883 containerd[1483]: time="2025-05-09T01:40:49.008765114Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 01:40:49.309409 containerd[1483]: time="2025-05-09T01:40:49.307120018Z" level=info msg="Start subscribing containerd event" May 9 01:40:49.309409 containerd[1483]: time="2025-05-09T01:40:49.307261103Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 01:40:49.309409 containerd[1483]: time="2025-05-09T01:40:49.307339801Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 9 01:40:49.309409 containerd[1483]: time="2025-05-09T01:40:49.307272104Z" level=info msg="Start recovering state" May 9 01:40:49.309409 containerd[1483]: time="2025-05-09T01:40:49.307483049Z" level=info msg="Start event monitor" May 9 01:40:49.309409 containerd[1483]: time="2025-05-09T01:40:49.307500252Z" level=info msg="Start cni network conf syncer for default" May 9 01:40:49.309409 containerd[1483]: time="2025-05-09T01:40:49.307520489Z" level=info msg="Start streaming server" May 9 01:40:49.309409 containerd[1483]: time="2025-05-09T01:40:49.307536199Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 9 01:40:49.309409 containerd[1483]: time="2025-05-09T01:40:49.307545166Z" level=info msg="runtime interface starting up..." May 9 01:40:49.309409 containerd[1483]: time="2025-05-09T01:40:49.307551738Z" level=info msg="starting plugins..." May 9 01:40:49.309409 containerd[1483]: time="2025-05-09T01:40:49.307566045Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 9 01:40:49.309409 containerd[1483]: time="2025-05-09T01:40:49.307676953Z" level=info msg="containerd successfully booted in 0.444158s" May 9 01:40:49.309848 systemd[1]: Started containerd.service - containerd container runtime. May 9 01:40:49.339055 systemd-networkd[1388]: eth0: Gained IPv6LL May 9 01:40:49.340274 systemd-timesyncd[1415]: Network configuration changed, trying to establish connection. May 9 01:40:49.343272 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 01:40:49.347578 systemd[1]: Reached target network-online.target - Network is Online. May 9 01:40:49.354829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:40:49.363587 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 01:40:49.403088 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 01:40:50.176936 sshd[1533]: Accepted publickey for core from 172.24.4.1 port 54750 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:40:50.181524 sshd-session[1533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:40:50.214577 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 01:40:50.224596 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 01:40:50.241096 systemd-logind[1458]: New session 1 of user core. May 9 01:40:50.273984 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 01:40:50.283167 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 01:40:50.303225 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 01:40:50.306559 systemd-logind[1458]: New session c1 of user core. May 9 01:40:50.467008 systemd[1573]: Queued start job for default target default.target. May 9 01:40:50.473712 systemd[1573]: Created slice app.slice - User Application Slice. May 9 01:40:50.473740 systemd[1573]: Reached target paths.target - Paths. May 9 01:40:50.473778 systemd[1573]: Reached target timers.target - Timers. May 9 01:40:50.479935 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 01:40:50.493028 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 01:40:50.494206 systemd[1573]: Reached target sockets.target - Sockets. 
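During the containerd startup above, the CRI plugin logged "failed to load cni during init ... no network config found in /etc/cni/net.d"; the cni conf syncer it then starts picks a configuration up as soon as one appears. A sketch of a conflist that would satisfy it, with an assumed network name and subnet (nothing in this log specifies them):

    # Illustrative bridge+loopback CNI config; /etc/cni/net.d is the
    # directory the CRI plugin's conf syncer watches.
    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/10-bridge.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]]
          }
        },
        { "type": "loopback" }
      ]
    }
    EOF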
May 9 01:40:50.494391 systemd[1573]: Reached target basic.target - Basic System. May 9 01:40:50.494436 systemd[1573]: Reached target default.target - Main User Target. May 9 01:40:50.494464 systemd[1573]: Startup finished in 181ms. May 9 01:40:50.494681 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 01:40:50.507087 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 01:40:50.854586 systemd[1]: Started sshd@1-172.24.4.153:22-172.24.4.1:54760.service - OpenSSH per-connection server daemon (172.24.4.1:54760). May 9 01:40:51.505997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:40:51.531719 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 01:40:52.156908 sshd[1584]: Accepted publickey for core from 172.24.4.1 port 54760 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:40:52.159335 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:40:52.180750 systemd-logind[1458]: New session 2 of user core. May 9 01:40:52.189106 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 01:40:52.798910 sshd[1599]: Connection closed by 172.24.4.1 port 54760 May 9 01:40:52.801411 sshd-session[1584]: pam_unix(sshd:session): session closed for user core May 9 01:40:52.821399 systemd[1]: sshd@1-172.24.4.153:22-172.24.4.1:54760.service: Deactivated successfully. May 9 01:40:52.825439 systemd[1]: session-2.scope: Deactivated successfully. May 9 01:40:52.830160 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. May 9 01:40:52.836154 systemd[1]: Started sshd@2-172.24.4.153:22-172.24.4.1:42380.service - OpenSSH per-connection server daemon (172.24.4.1:42380). May 9 01:40:52.847655 systemd-logind[1458]: Removed session 2. May 9 01:40:52.921250 kubelet[1592]: E0509 01:40:52.921172 1592 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 01:40:52.923767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 01:40:52.923934 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 01:40:52.924272 systemd[1]: kubelet.service: Consumed 1.988s CPU time, 245.6M memory peak. May 9 01:40:53.835553 login[1537]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 9 01:40:53.848075 login[1538]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 9 01:40:53.852286 systemd-logind[1458]: New session 3 of user core. May 9 01:40:53.863214 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 01:40:53.870946 systemd-logind[1458]: New session 4 of user core. May 9 01:40:53.882308 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 01:40:54.210429 sshd[1605]: Accepted publickey for core from 172.24.4.1 port 42380 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:40:54.212520 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:40:54.223047 systemd-logind[1458]: New session 5 of user core. May 9 01:40:54.236330 systemd[1]: Started session-5.scope - Session 5 of User core. 
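The "(kubelet)[1592]: kubelet.service: Referenced but unset environment variable" notice above is systemd reporting that $KUBELET_EXTRA_ARGS and $KUBELET_KUBEADM_ARGS expand to empty strings in the unit's ExecStart. It is harmless, and it disappears once an environment file the unit references defines them; a kubeadm-style sketch (the path and flag value are assumptions based on the flags this kubelet is later started with, not something this log spells out):

    # Assumed kubeadm-style environment file; even empty definitions
    # silence the "referenced but unset" notice.
    sudo mkdir -p /var/lib/kubelet
    sudo tee /var/lib/kubelet/kubeadm-flags.env >/dev/null <<'EOF'
    KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
    KUBELET_EXTRA_ARGS=""
    EOF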
May 9 01:40:54.852501 sshd[1636]: Connection closed by 172.24.4.1 port 42380 May 9 01:40:54.852217 sshd-session[1605]: pam_unix(sshd:session): session closed for user core May 9 01:40:54.859296 systemd[1]: sshd@2-172.24.4.153:22-172.24.4.1:42380.service: Deactivated successfully. May 9 01:40:54.863523 systemd[1]: session-5.scope: Deactivated successfully. May 9 01:40:54.868289 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. May 9 01:40:54.870456 systemd-logind[1458]: Removed session 5. May 9 01:40:54.965877 coreos-metadata[1445]: May 09 01:40:54.965 WARN failed to locate config-drive, using the metadata service API instead May 9 01:40:55.075054 coreos-metadata[1445]: May 09 01:40:55.074 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 9 01:40:55.398886 coreos-metadata[1445]: May 09 01:40:55.398 INFO Fetch successful May 9 01:40:55.398886 coreos-metadata[1445]: May 09 01:40:55.398 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 9 01:40:55.415105 coreos-metadata[1445]: May 09 01:40:55.415 INFO Fetch successful May 9 01:40:55.415105 coreos-metadata[1445]: May 09 01:40:55.415 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 9 01:40:55.429082 coreos-metadata[1445]: May 09 01:40:55.428 INFO Fetch successful May 9 01:40:55.429082 coreos-metadata[1445]: May 09 01:40:55.429 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 9 01:40:55.443250 coreos-metadata[1445]: May 09 01:40:55.443 INFO Fetch successful May 9 01:40:55.443250 coreos-metadata[1445]: May 09 01:40:55.443 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 9 01:40:55.457530 coreos-metadata[1445]: May 09 01:40:55.457 INFO Fetch successful May 9 01:40:55.457530 coreos-metadata[1445]: May 09 01:40:55.457 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 9 01:40:55.470902 coreos-metadata[1445]: May 09 01:40:55.470 INFO Fetch successful May 9 01:40:55.534426 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 9 01:40:55.536434 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 01:40:55.731392 coreos-metadata[1525]: May 09 01:40:55.730 WARN failed to locate config-drive, using the metadata service API instead May 9 01:40:55.773603 coreos-metadata[1525]: May 09 01:40:55.773 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 9 01:40:55.789713 coreos-metadata[1525]: May 09 01:40:55.789 INFO Fetch successful May 9 01:40:55.789713 coreos-metadata[1525]: May 09 01:40:55.789 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 9 01:40:55.803179 coreos-metadata[1525]: May 09 01:40:55.803 INFO Fetch successful May 9 01:40:55.814466 unknown[1525]: wrote ssh authorized keys file for user: core May 9 01:40:55.862058 update-ssh-keys[1650]: Updated "/home/core/.ssh/authorized_keys" May 9 01:40:55.863331 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 9 01:40:55.868650 systemd[1]: Finished sshkeys.service. May 9 01:40:55.873676 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 01:40:55.874317 systemd[1]: Startup finished in 1.217s (kernel) + 15.729s (initrd) + 11.386s (userspace) = 28.333s. May 9 01:41:02.965410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
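coreos-metadata first looks for a config-drive and, failing that (the WARN lines above), walks the OpenStack/EC2-compatible metadata API at 169.254.169.254. The same endpoints can be fetched by hand from the instance:

    # The exact URLs fetched above; -s suppresses progress output.
    curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json
    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key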
May 9 01:41:02.969592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:41:03.312593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:41:03.320036 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 01:41:03.495956 kubelet[1661]: E0509 01:41:03.495700 1661 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 01:41:03.502687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 01:41:03.503123 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 01:41:03.504079 systemd[1]: kubelet.service: Consumed 313ms CPU time, 96.1M memory peak. May 9 01:41:04.876267 systemd[1]: Started sshd@3-172.24.4.153:22-172.24.4.1:54350.service - OpenSSH per-connection server daemon (172.24.4.1:54350). May 9 01:41:06.351940 sshd[1671]: Accepted publickey for core from 172.24.4.1 port 54350 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:41:06.354669 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:41:06.365384 systemd-logind[1458]: New session 6 of user core. May 9 01:41:06.374113 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 01:41:07.084090 sshd[1673]: Connection closed by 172.24.4.1 port 54350 May 9 01:41:07.082895 sshd-session[1671]: pam_unix(sshd:session): session closed for user core May 9 01:41:07.100684 systemd[1]: sshd@3-172.24.4.153:22-172.24.4.1:54350.service: Deactivated successfully. May 9 01:41:07.104061 systemd[1]: session-6.scope: Deactivated successfully. May 9 01:41:07.105664 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. May 9 01:41:07.109781 systemd[1]: Started sshd@4-172.24.4.153:22-172.24.4.1:54360.service - OpenSSH per-connection server daemon (172.24.4.1:54360). May 9 01:41:07.112170 systemd-logind[1458]: Removed session 6. May 9 01:41:08.299917 sshd[1678]: Accepted publickey for core from 172.24.4.1 port 54360 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:41:08.307759 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:41:08.317398 systemd-logind[1458]: New session 7 of user core. May 9 01:41:08.335176 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 01:41:08.865766 sshd[1681]: Connection closed by 172.24.4.1 port 54360 May 9 01:41:08.867040 sshd-session[1678]: pam_unix(sshd:session): session closed for user core May 9 01:41:08.882409 systemd[1]: sshd@4-172.24.4.153:22-172.24.4.1:54360.service: Deactivated successfully. May 9 01:41:08.885773 systemd[1]: session-7.scope: Deactivated successfully. May 9 01:41:08.889164 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. May 9 01:41:08.892528 systemd[1]: Started sshd@5-172.24.4.153:22-172.24.4.1:54370.service - OpenSSH per-connection server daemon (172.24.4.1:54370). May 9 01:41:08.896074 systemd-logind[1458]: Removed session 7. 
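Every kubelet start in this log dies the same way: /var/lib/kubelet/config.yaml does not exist yet, because no kubeadm init/join has run to write it. A minimal hand-written KubeletConfiguration that would at least get past the file load, purely as a sketch (kubeadm generates the real one):

    sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
    # Illustrative minimal kubelet config; not the file kubeadm would write.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd   # matches SystemdCgroup=true in the containerd CRI config above
    EOF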
May 9 01:41:10.449681 sshd[1686]: Accepted publickey for core from 172.24.4.1 port 54370 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:41:10.452352 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:41:10.462989 systemd-logind[1458]: New session 8 of user core. May 9 01:41:10.472110 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 01:41:11.319641 sshd[1689]: Connection closed by 172.24.4.1 port 54370 May 9 01:41:11.320724 sshd-session[1686]: pam_unix(sshd:session): session closed for user core May 9 01:41:11.338058 systemd[1]: sshd@5-172.24.4.153:22-172.24.4.1:54370.service: Deactivated successfully. May 9 01:41:11.341430 systemd[1]: session-8.scope: Deactivated successfully. May 9 01:41:11.343535 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. May 9 01:41:11.350519 systemd[1]: Started sshd@6-172.24.4.153:22-172.24.4.1:54372.service - OpenSSH per-connection server daemon (172.24.4.1:54372). May 9 01:41:11.353221 systemd-logind[1458]: Removed session 8. May 9 01:41:12.935553 sshd[1694]: Accepted publickey for core from 172.24.4.1 port 54372 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:41:12.938238 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:41:12.949153 systemd-logind[1458]: New session 9 of user core. May 9 01:41:12.967105 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 01:41:13.376146 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 01:41:13.376843 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 01:41:13.394519 sudo[1698]: pam_unix(sudo:session): session closed for user root May 9 01:41:13.662028 sshd[1697]: Connection closed by 172.24.4.1 port 54372 May 9 01:41:13.663030 sshd-session[1694]: pam_unix(sshd:session): session closed for user core May 9 01:41:13.676981 systemd[1]: sshd@6-172.24.4.153:22-172.24.4.1:54372.service: Deactivated successfully. May 9 01:41:13.680042 systemd[1]: session-9.scope: Deactivated successfully. May 9 01:41:13.683181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 9 01:41:13.684652 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. May 9 01:41:13.689323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:41:13.692327 systemd[1]: Started sshd@7-172.24.4.153:22-172.24.4.1:39174.service - OpenSSH per-connection server daemon (172.24.4.1:39174). May 9 01:41:13.698332 systemd-logind[1458]: Removed session 9. May 9 01:41:14.003146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:41:14.012397 (kubelet)[1714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 01:41:14.076114 kubelet[1714]: E0509 01:41:14.075997 1714 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 01:41:14.081430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 01:41:14.082019 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 9 01:41:14.082978 systemd[1]: kubelet.service: Consumed 259ms CPU time, 96.3M memory peak. May 9 01:41:14.948358 sshd[1704]: Accepted publickey for core from 172.24.4.1 port 39174 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:41:14.951250 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:41:14.962110 systemd-logind[1458]: New session 10 of user core. May 9 01:41:14.971125 systemd[1]: Started session-10.scope - Session 10 of User core. May 9 01:41:15.553531 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 01:41:15.554980 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 01:41:15.562463 sudo[1724]: pam_unix(sudo:session): session closed for user root May 9 01:41:15.572938 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 9 01:41:15.573505 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 01:41:15.592304 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 9 01:41:15.659677 augenrules[1746]: No rules May 9 01:41:15.661541 systemd[1]: audit-rules.service: Deactivated successfully. May 9 01:41:15.662011 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 9 01:41:15.664231 sudo[1723]: pam_unix(sudo:session): session closed for user root May 9 01:41:15.859684 sshd[1722]: Connection closed by 172.24.4.1 port 39174 May 9 01:41:15.860211 sshd-session[1704]: pam_unix(sshd:session): session closed for user core May 9 01:41:15.875644 systemd[1]: sshd@7-172.24.4.153:22-172.24.4.1:39174.service: Deactivated successfully. May 9 01:41:15.879071 systemd[1]: session-10.scope: Deactivated successfully. May 9 01:41:15.881884 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. May 9 01:41:15.884781 systemd[1]: Started sshd@8-172.24.4.153:22-172.24.4.1:39180.service - OpenSSH per-connection server daemon (172.24.4.1:39180). May 9 01:41:15.887531 systemd-logind[1458]: Removed session 10. May 9 01:41:17.275408 sshd[1754]: Accepted publickey for core from 172.24.4.1 port 39180 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:41:17.278076 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:41:17.289239 systemd-logind[1458]: New session 11 of user core. May 9 01:41:17.300134 systemd[1]: Started session-11.scope - Session 11 of User core. May 9 01:41:17.839553 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 01:41:17.840189 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 01:41:18.599245 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 01:41:18.614731 (dockerd)[1778]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 01:41:19.499885 dockerd[1778]: time="2025-05-09T01:41:19.499833815Z" level=info msg="Starting up" May 9 01:41:19.502029 dockerd[1778]: time="2025-05-09T01:41:19.501781198Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 9 01:41:19.558657 systemd[1]: var-lib-docker-metacopy\x2dcheck3508051526-merged.mount: Deactivated successfully. 
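The sudo commands above delete the shipped SELinux/default rule files and restart audit-rules, so augenrules correctly reports "No rules". augenrules merges every *.rules fragment under /etc/audit/rules.d into the running ruleset; a one-line illustrative fragment (the file name and watch path are examples only):

    # Watch sshd_config for writes/attribute changes, tagged "sshd_config".
    echo '-w /etc/ssh/sshd_config -p wa -k sshd_config' | \
      sudo tee /etc/audit/rules.d/10-sshd.rules
    sudo systemctl restart audit-rules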
May 9 01:41:20.464334 systemd-resolved[1393]: Clock change detected. Flushing caches. May 9 01:41:20.464888 systemd-timesyncd[1415]: Contacted time server 216.229.4.66:123 (2.flatcar.pool.ntp.org). May 9 01:41:20.464943 systemd-timesyncd[1415]: Initial clock synchronization to Fri 2025-05-09 01:41:20.464269 UTC. May 9 01:41:20.476195 dockerd[1778]: time="2025-05-09T01:41:20.476099944Z" level=info msg="Loading containers: start." May 9 01:41:20.662451 kernel: Initializing XFRM netlink socket May 9 01:41:20.864240 systemd-networkd[1388]: docker0: Link UP May 9 01:41:21.078033 dockerd[1778]: time="2025-05-09T01:41:21.077882708Z" level=info msg="Loading containers: done." May 9 01:41:21.144146 dockerd[1778]: time="2025-05-09T01:41:21.142733044Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 01:41:21.144146 dockerd[1778]: time="2025-05-09T01:41:21.143043076Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 9 01:41:21.144146 dockerd[1778]: time="2025-05-09T01:41:21.143430913Z" level=info msg="Daemon has completed initialization" May 9 01:41:21.145539 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1288583744-merged.mount: Deactivated successfully. May 9 01:41:21.217744 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 01:41:21.220242 dockerd[1778]: time="2025-05-09T01:41:21.218175837Z" level=info msg="API listen on /run/docker.sock" May 9 01:41:23.483829 containerd[1483]: time="2025-05-09T01:41:23.483750943Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 9 01:41:24.511183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171397289.mount: Deactivated successfully. May 9 01:41:25.085574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 9 01:41:25.089019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:41:25.210318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:41:25.220337 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 01:41:25.272717 kubelet[2041]: E0509 01:41:25.272666 2041 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 01:41:25.275635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 01:41:25.275797 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 01:41:25.276350 systemd[1]: kubelet.service: Consumed 149ms CPU time, 95.4M memory peak. 
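"API listen on /run/docker.sock" above means the daemon is now answering its HTTP API over the unix socket; it can be probed directly, which is all the docker CLI does under the hood:

    # Raw probe of the daemon's version endpoint over the unix socket.
    curl -s --unix-socket /run/docker.sock http://localhost/version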
May 9 01:41:26.547642 containerd[1483]: time="2025-05-09T01:41:26.547570816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:26.553052 containerd[1483]: time="2025-05-09T01:41:26.552996011Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674881" May 9 01:41:26.554827 containerd[1483]: time="2025-05-09T01:41:26.554753527Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:26.557901 containerd[1483]: time="2025-05-09T01:41:26.557854884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:26.559086 containerd[1483]: time="2025-05-09T01:41:26.558916936Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 3.075091043s" May 9 01:41:26.559086 containerd[1483]: time="2025-05-09T01:41:26.558950138Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 9 01:41:26.578618 containerd[1483]: time="2025-05-09T01:41:26.578580836Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 9 01:41:29.725721 containerd[1483]: time="2025-05-09T01:41:29.725485144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:29.741877 containerd[1483]: time="2025-05-09T01:41:29.741698412Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617542" May 9 01:41:29.759582 containerd[1483]: time="2025-05-09T01:41:29.759467888Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:29.833232 containerd[1483]: time="2025-05-09T01:41:29.833045864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:29.836138 containerd[1483]: time="2025-05-09T01:41:29.835738033Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 3.256949387s" May 9 01:41:29.836138 containerd[1483]: time="2025-05-09T01:41:29.835824205Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 9 01:41:29.891499 
containerd[1483]: time="2025-05-09T01:41:29.891367777Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 9 01:41:31.960944 containerd[1483]: time="2025-05-09T01:41:31.960639475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:31.962282 containerd[1483]: time="2025-05-09T01:41:31.962223266Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903690" May 9 01:41:31.963387 containerd[1483]: time="2025-05-09T01:41:31.963325633Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:31.966443 containerd[1483]: time="2025-05-09T01:41:31.966399999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:31.967635 containerd[1483]: time="2025-05-09T01:41:31.967498149Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 2.076056985s" May 9 01:41:31.967635 containerd[1483]: time="2025-05-09T01:41:31.967543093Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 9 01:41:31.990427 containerd[1483]: time="2025-05-09T01:41:31.990163347Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 9 01:41:33.596915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1839131137.mount: Deactivated successfully. May 9 01:41:34.427259 update_engine[1469]: I20250509 01:41:34.427146 1469 update_attempter.cc:509] Updating boot flags... 
May 9 01:41:34.789125 containerd[1483]: time="2025-05-09T01:41:34.787149434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:34.792166 containerd[1483]: time="2025-05-09T01:41:34.792046468Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185825" May 9 01:41:34.798376 containerd[1483]: time="2025-05-09T01:41:34.798244563Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:34.805730 containerd[1483]: time="2025-05-09T01:41:34.805624774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:34.807738 containerd[1483]: time="2025-05-09T01:41:34.807671082Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.817444426s" May 9 01:41:34.808604 containerd[1483]: time="2025-05-09T01:41:34.808379831Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 9 01:41:34.862039 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2096) May 9 01:41:34.878669 containerd[1483]: time="2025-05-09T01:41:34.878351634Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 9 01:41:35.336532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 9 01:41:35.340149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:41:35.536888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:41:35.547368 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 01:41:35.599503 kubelet[2116]: E0509 01:41:35.599386 2116 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 01:41:35.604548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 01:41:35.604879 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 01:41:35.606070 systemd[1]: kubelet.service: Consumed 217ms CPU time, 97.6M memory peak. May 9 01:41:36.453126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount234758537.mount: Deactivated successfully. 
May 9 01:41:37.703238 containerd[1483]: time="2025-05-09T01:41:37.703156885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:37.704997 containerd[1483]: time="2025-05-09T01:41:37.704819363Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" May 9 01:41:37.706372 containerd[1483]: time="2025-05-09T01:41:37.706319927Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:37.709220 containerd[1483]: time="2025-05-09T01:41:37.709170534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:37.710996 containerd[1483]: time="2025-05-09T01:41:37.710280115Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.83185859s" May 9 01:41:37.710996 containerd[1483]: time="2025-05-09T01:41:37.710331962Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 9 01:41:37.733373 containerd[1483]: time="2025-05-09T01:41:37.733318373Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 9 01:41:38.527182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409063728.mount: Deactivated successfully. 
May 9 01:41:38.536377 containerd[1483]: time="2025-05-09T01:41:38.536104779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:38.537889 containerd[1483]: time="2025-05-09T01:41:38.537769361Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" May 9 01:41:38.541000 containerd[1483]: time="2025-05-09T01:41:38.539072134Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:38.543432 containerd[1483]: time="2025-05-09T01:41:38.543401624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:38.545196 containerd[1483]: time="2025-05-09T01:41:38.545131438Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 811.763783ms" May 9 01:41:38.545252 containerd[1483]: time="2025-05-09T01:41:38.545205417Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 9 01:41:38.577951 containerd[1483]: time="2025-05-09T01:41:38.577922708Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 9 01:41:39.171204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241031803.mount: Deactivated successfully. May 9 01:41:42.871207 containerd[1483]: time="2025-05-09T01:41:42.870948677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:42.875297 containerd[1483]: time="2025-05-09T01:41:42.875079485Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" May 9 01:41:42.878340 containerd[1483]: time="2025-05-09T01:41:42.878178286Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:42.887055 containerd[1483]: time="2025-05-09T01:41:42.886345364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:41:42.891144 containerd[1483]: time="2025-05-09T01:41:42.890430385Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.312272576s" May 9 01:41:42.891144 containerd[1483]: time="2025-05-09T01:41:42.890542195Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 9 01:41:45.837395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
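The climbing "restart counter is at N" lines come from the unit's Restart= policy rescheduling kubelet after each config-file failure. The counter and policy can be inspected without reading the journal:

    # NRestarts is the same counter the "Scheduled restart job" lines report.
    systemctl show kubelet --property=NRestarts,Restart,RestartUSec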
May 9 01:41:45.844436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:41:46.164171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:41:46.175561 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 01:41:46.237022 kubelet[2316]: E0509 01:41:46.236945 2316 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 01:41:46.239994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 01:41:46.240138 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 01:41:46.240455 systemd[1]: kubelet.service: Consumed 236ms CPU time, 96.6M memory peak. May 9 01:41:46.665705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:41:46.666565 systemd[1]: kubelet.service: Consumed 236ms CPU time, 96.6M memory peak. May 9 01:41:46.672222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:41:46.716243 systemd[1]: Reload requested from client PID 2330 ('systemctl') (unit session-11.scope)... May 9 01:41:46.716279 systemd[1]: Reloading... May 9 01:41:46.835004 zram_generator::config[2376]: No configuration found. May 9 01:41:47.190735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 01:41:47.318599 systemd[1]: Reloading finished in 600 ms. May 9 01:41:47.376579 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 01:41:47.376658 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 01:41:47.377214 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:41:47.377269 systemd[1]: kubelet.service: Consumed 96ms CPU time, 83.6M memory peak. May 9 01:41:47.380295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:41:49.127114 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:41:49.148944 (kubelet)[2442]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 01:41:49.309996 kubelet[2442]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 01:41:49.309996 kubelet[2442]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 01:41:49.309996 kubelet[2442]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
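During the reload above, systemd flags docker.socket for using the legacy /var/run/docker.sock path and rewrites it to /run/docker.sock in memory. The permanent fix it asks for can be a small drop-in rather than an edit to the vendor unit; a sketch:

    # Drop-in clearing the inherited ListenStream (an empty assignment
    # resets a systemd list setting) and re-adding the non-legacy path.
    sudo mkdir -p /etc/systemd/system/docker.socket.d
    sudo tee /etc/systemd/system/docker.socket.d/10-runpath.conf >/dev/null <<'EOF'
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    sudo systemctl daemon-reload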
May 9 01:41:49.309996 kubelet[2442]: I0509 01:41:49.308537 2442 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 01:41:50.041378 kubelet[2442]: I0509 01:41:50.041323 2442 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 01:41:50.041555 kubelet[2442]: I0509 01:41:50.041544 2442 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 01:41:50.041886 kubelet[2442]: I0509 01:41:50.041870 2442 server.go:927] "Client rotation is on, will bootstrap in background" May 9 01:41:50.066930 kubelet[2442]: I0509 01:41:50.066903 2442 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 01:41:50.070430 kubelet[2442]: E0509 01:41:50.070291 2442 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:50.080104 kubelet[2442]: I0509 01:41:50.079939 2442 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 01:41:50.080358 kubelet[2442]: I0509 01:41:50.080232 2442 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 01:41:50.080451 kubelet[2442]: I0509 01:41:50.080258 2442 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-bbb05de7dc.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 01:41:50.081457 kubelet[2442]: I0509 01:41:50.081392 2442 topology_manager.go:138] "Creating topology manager with none policy" May 9 01:41:50.081457 kubelet[2442]: I0509 01:41:50.081418 2442 container_manager_linux.go:301] "Creating device plugin manager" May 9 01:41:50.081614 kubelet[2442]: I0509 01:41:50.081545 2442 state_mem.go:36] "Initialized new in-memory 
state store" May 9 01:41:50.083120 kubelet[2442]: I0509 01:41:50.083064 2442 kubelet.go:400] "Attempting to sync node with API server" May 9 01:41:50.083594 kubelet[2442]: W0509 01:41:50.083505 2442 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-bbb05de7dc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:50.083594 kubelet[2442]: E0509 01:41:50.083574 2442 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-bbb05de7dc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:50.083594 kubelet[2442]: I0509 01:41:50.083090 2442 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 01:41:50.083798 kubelet[2442]: I0509 01:41:50.083629 2442 kubelet.go:312] "Adding apiserver pod source" May 9 01:41:50.083798 kubelet[2442]: I0509 01:41:50.083662 2442 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 01:41:50.090892 kubelet[2442]: W0509 01:41:50.090258 2442 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:50.090892 kubelet[2442]: E0509 01:41:50.090306 2442 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:50.090892 kubelet[2442]: I0509 01:41:50.090654 2442 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 9 01:41:50.093403 kubelet[2442]: I0509 01:41:50.092473 2442 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 01:41:50.093403 kubelet[2442]: W0509 01:41:50.092538 2442 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 9 01:41:50.093403 kubelet[2442]: I0509 01:41:50.093098 2442 server.go:1264] "Started kubelet" May 9 01:41:50.107704 kubelet[2442]: I0509 01:41:50.107621 2442 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 01:41:50.117284 kubelet[2442]: I0509 01:41:50.117244 2442 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 01:41:50.118990 kubelet[2442]: I0509 01:41:50.118887 2442 server.go:455] "Adding debug handlers to kubelet server" May 9 01:41:50.125986 kubelet[2442]: I0509 01:41:50.124055 2442 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 01:41:50.125986 kubelet[2442]: I0509 01:41:50.124313 2442 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 01:41:50.125986 kubelet[2442]: E0509 01:41:50.124438 2442 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.153:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.153:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-n-bbb05de7dc.novalocal.183db85605c46f26 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-bbb05de7dc.novalocal,UID:ci-4284-0-0-n-bbb05de7dc.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-bbb05de7dc.novalocal,},FirstTimestamp:2025-05-09 01:41:50.09307831 +0000 UTC m=+0.935702782,LastTimestamp:2025-05-09 01:41:50.09307831 +0000 UTC m=+0.935702782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-bbb05de7dc.novalocal,}" May 9 01:41:50.125986 kubelet[2442]: E0509 01:41:50.124753 2442 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-bbb05de7dc.novalocal\" not found" May 9 01:41:50.125986 kubelet[2442]: I0509 01:41:50.124843 2442 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 01:41:50.125986 kubelet[2442]: I0509 01:41:50.125131 2442 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 01:41:50.125986 kubelet[2442]: I0509 01:41:50.125277 2442 reconciler.go:26] "Reconciler: start to sync state" May 9 01:41:50.126978 kubelet[2442]: W0509 01:41:50.125958 2442 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:50.127050 kubelet[2442]: E0509 01:41:50.127029 2442 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:50.127974 kubelet[2442]: E0509 01:41:50.127894 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-bbb05de7dc.novalocal?timeout=10s\": dial tcp 172.24.4.153:6443: connect: connection refused" interval="200ms" May 9 01:41:50.128456 kubelet[2442]: I0509 01:41:50.128410 2442 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 01:41:50.132797 kubelet[2442]: I0509 01:41:50.132753 2442 factory.go:221] Registration of the containerd container factory successfully May 9 01:41:50.132797 kubelet[2442]: I0509 01:41:50.132795 2442 factory.go:221] Registration of the systemd container factory successfully May 9 01:41:50.140335 kubelet[2442]: I0509 01:41:50.140281 2442 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 01:41:50.141387 kubelet[2442]: I0509 01:41:50.141372 2442 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 01:41:50.141463 kubelet[2442]: I0509 01:41:50.141454 2442 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 01:41:50.141774 kubelet[2442]: I0509 01:41:50.141532 2442 kubelet.go:2337] "Starting kubelet main sync loop" May 9 01:41:50.141774 kubelet[2442]: E0509 01:41:50.141581 2442 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 01:41:50.154311 kubelet[2442]: W0509 01:41:50.154218 2442 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:50.154405 kubelet[2442]: E0509 01:41:50.154334 2442 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:50.161806 kubelet[2442]: E0509 01:41:50.161762 2442 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 01:41:50.167607 kubelet[2442]: I0509 01:41:50.167341 2442 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 01:41:50.167607 kubelet[2442]: I0509 01:41:50.167362 2442 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 01:41:50.167607 kubelet[2442]: I0509 01:41:50.167378 2442 state_mem.go:36] "Initialized new in-memory state store" May 9 01:41:50.173463 kubelet[2442]: I0509 01:41:50.173450 2442 policy_none.go:49] "None policy: Start" May 9 01:41:50.174319 kubelet[2442]: I0509 01:41:50.174308 2442 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 01:41:50.174428 kubelet[2442]: I0509 01:41:50.174418 2442 state_mem.go:35] "Initializing new in-memory state store" May 9 01:41:50.204879 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 01:41:50.227736 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 01:41:50.232092 kubelet[2442]: I0509 01:41:50.232034 2442 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.233594 kubelet[2442]: E0509 01:41:50.233518 2442 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.153:6443/api/v1/nodes\": dial tcp 172.24.4.153:6443: connect: connection refused" node="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.239859 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 9 01:41:50.241770 kubelet[2442]: E0509 01:41:50.241735 2442 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 01:41:50.255288 kubelet[2442]: I0509 01:41:50.255128 2442 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 01:41:50.256307 kubelet[2442]: I0509 01:41:50.255917 2442 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 01:41:50.257436 kubelet[2442]: I0509 01:41:50.257200 2442 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 01:41:50.260741 kubelet[2442]: E0509 01:41:50.260644 2442 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284-0-0-n-bbb05de7dc.novalocal\" not found" May 9 01:41:50.329767 kubelet[2442]: E0509 01:41:50.329529 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-bbb05de7dc.novalocal?timeout=10s\": dial tcp 172.24.4.153:6443: connect: connection refused" interval="400ms" May 9 01:41:50.437766 kubelet[2442]: I0509 01:41:50.437679 2442 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.438264 kubelet[2442]: E0509 01:41:50.438191 2442 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.153:6443/api/v1/nodes\": dial tcp 172.24.4.153:6443: connect: connection refused" node="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.442866 kubelet[2442]: I0509 01:41:50.442393 2442 topology_manager.go:215] "Topology Admit Handler" podUID="6cc20b4385be144e1b2e55e6434c22e3" podNamespace="kube-system" podName="kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.445683 kubelet[2442]: I0509 01:41:50.445628 2442 topology_manager.go:215] "Topology Admit Handler" podUID="80d931ceda2a65aff40a80d00d44fb7d" podNamespace="kube-system" podName="kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.448222 kubelet[2442]: I0509 01:41:50.448169 2442 topology_manager.go:215] "Topology Admit Handler" podUID="00ae8c922f166b57877cf66024788381" podNamespace="kube-system" podName="kube-scheduler-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.471541 systemd[1]: Created slice kubepods-burstable-pod80d931ceda2a65aff40a80d00d44fb7d.slice - libcontainer container kubepods-burstable-pod80d931ceda2a65aff40a80d00d44fb7d.slice. May 9 01:41:50.499647 systemd[1]: Created slice kubepods-burstable-pod6cc20b4385be144e1b2e55e6434c22e3.slice - libcontainer container kubepods-burstable-pod6cc20b4385be144e1b2e55e6434c22e3.slice. May 9 01:41:50.524701 systemd[1]: Created slice kubepods-burstable-pod00ae8c922f166b57877cf66024788381.slice - libcontainer container kubepods-burstable-pod00ae8c922f166b57877cf66024788381.slice. 
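[Editor's note] The lease controller's retry interval doubles on each failure: 200 ms first, 400 ms just above, with 800 ms, 1.6 s and 3.2 s appearing further down. That is plain exponential backoff; a small sketch of the doubling (the ceiling is assumed, since the log only shows growth up to 3.2 s):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond  // first retry interval seen in the log
	const maxInterval = 7 * time.Second // assumed ceiling; not shown in the log

	for i := 1; i <= 6; i++ {
		fmt.Printf("retry %d after %v\n", i, interval) // 200ms, 400ms, 800ms, 1.6s, 3.2s, ...
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```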
May 9 01:41:50.528132 kubelet[2442]: I0509 01:41:50.527784 2442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cc20b4385be144e1b2e55e6434c22e3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"6cc20b4385be144e1b2e55e6434c22e3\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.528132 kubelet[2442]: I0509 01:41:50.527864 2442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80d931ceda2a65aff40a80d00d44fb7d-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"80d931ceda2a65aff40a80d00d44fb7d\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.528132 kubelet[2442]: I0509 01:41:50.527925 2442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80d931ceda2a65aff40a80d00d44fb7d-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"80d931ceda2a65aff40a80d00d44fb7d\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.528132 kubelet[2442]: I0509 01:41:50.528022 2442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80d931ceda2a65aff40a80d00d44fb7d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"80d931ceda2a65aff40a80d00d44fb7d\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.528478 kubelet[2442]: I0509 01:41:50.528078 2442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00ae8c922f166b57877cf66024788381-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"00ae8c922f166b57877cf66024788381\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.528478 kubelet[2442]: I0509 01:41:50.528202 2442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cc20b4385be144e1b2e55e6434c22e3-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"6cc20b4385be144e1b2e55e6434c22e3\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.528478 kubelet[2442]: I0509 01:41:50.528315 2442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cc20b4385be144e1b2e55e6434c22e3-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"6cc20b4385be144e1b2e55e6434c22e3\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.528478 kubelet[2442]: I0509 01:41:50.528374 2442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80d931ceda2a65aff40a80d00d44fb7d-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"80d931ceda2a65aff40a80d00d44fb7d\") " 
pod="kube-system/kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.528478 kubelet[2442]: I0509 01:41:50.528434 2442 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80d931ceda2a65aff40a80d00d44fb7d-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"80d931ceda2a65aff40a80d00d44fb7d\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.731324 kubelet[2442]: E0509 01:41:50.731199 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-bbb05de7dc.novalocal?timeout=10s\": dial tcp 172.24.4.153:6443: connect: connection refused" interval="800ms" May 9 01:41:50.793015 containerd[1483]: time="2025-05-09T01:41:50.792899574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal,Uid:80d931ceda2a65aff40a80d00d44fb7d,Namespace:kube-system,Attempt:0,}" May 9 01:41:50.820490 containerd[1483]: time="2025-05-09T01:41:50.820322185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal,Uid:6cc20b4385be144e1b2e55e6434c22e3,Namespace:kube-system,Attempt:0,}" May 9 01:41:50.835742 containerd[1483]: time="2025-05-09T01:41:50.834210723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-bbb05de7dc.novalocal,Uid:00ae8c922f166b57877cf66024788381,Namespace:kube-system,Attempt:0,}" May 9 01:41:50.843604 kubelet[2442]: I0509 01:41:50.843210 2442 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:50.844214 kubelet[2442]: E0509 01:41:50.844076 2442 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.153:6443/api/v1/nodes\": dial tcp 172.24.4.153:6443: connect: connection refused" node="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:51.424816 kubelet[2442]: W0509 01:41:51.424682 2442 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:51.424816 kubelet[2442]: E0509 01:41:51.424805 2442 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:51.533098 kubelet[2442]: E0509 01:41:51.532946 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-bbb05de7dc.novalocal?timeout=10s\": dial tcp 172.24.4.153:6443: connect: connection refused" interval="1.6s" May 9 01:41:51.637455 kubelet[2442]: W0509 01:41:51.637117 2442 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:51.637455 kubelet[2442]: E0509 01:41:51.637284 2442 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get "https://172.24.4.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:51.648929 kubelet[2442]: I0509 01:41:51.648489 2442 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:51.649439 kubelet[2442]: E0509 01:41:51.649385 2442 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.153:6443/api/v1/nodes\": dial tcp 172.24.4.153:6443: connect: connection refused" node="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:51.652321 kubelet[2442]: W0509 01:41:51.652224 2442 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-bbb05de7dc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:51.652579 kubelet[2442]: E0509 01:41:51.652536 2442 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-bbb05de7dc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:51.690186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376552477.mount: Deactivated successfully. May 9 01:41:51.706649 containerd[1483]: time="2025-05-09T01:41:51.706534616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 01:41:51.710370 containerd[1483]: time="2025-05-09T01:41:51.710260785Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 9 01:41:51.711551 kubelet[2442]: W0509 01:41:51.711430 2442 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:51.711551 kubelet[2442]: E0509 01:41:51.711514 2442 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:51.714070 containerd[1483]: time="2025-05-09T01:41:51.713701348Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 01:41:51.715499 containerd[1483]: time="2025-05-09T01:41:51.715443826Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 01:41:51.718695 containerd[1483]: time="2025-05-09T01:41:51.718560461Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 9 01:41:51.721052 containerd[1483]: time="2025-05-09T01:41:51.720468560Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 9 01:41:51.721052 containerd[1483]: time="2025-05-09T01:41:51.720630694Z" 
level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 01:41:51.727225 containerd[1483]: time="2025-05-09T01:41:51.727100027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 01:41:51.732545 containerd[1483]: time="2025-05-09T01:41:51.731268114Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 900.97073ms" May 9 01:41:51.735690 containerd[1483]: time="2025-05-09T01:41:51.735578779Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 937.330623ms" May 9 01:41:51.738604 containerd[1483]: time="2025-05-09T01:41:51.738456045Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 852.908019ms" May 9 01:41:51.813887 containerd[1483]: time="2025-05-09T01:41:51.813634362Z" level=info msg="connecting to shim 748f80c43dd4f281001432c6dc884afd775c86017e85aa3e2d14c6db6a361fc6" address="unix:///run/containerd/s/40d759f6102056eebf28cc5d597ae4325b33f0ffc31f63bc7c0811ef13d7ba9f" namespace=k8s.io protocol=ttrpc version=3 May 9 01:41:51.822361 containerd[1483]: time="2025-05-09T01:41:51.822295296Z" level=info msg="connecting to shim faa10c70b8d8202a34ee9b4a8f4ae3d072fe2107a045c51d05737ea434d4344c" address="unix:///run/containerd/s/bf1b318224adee0cf6a5f2addccbeec7e0ecf44948709a3036e8f5f6e6ae37c4" namespace=k8s.io protocol=ttrpc version=3 May 9 01:41:51.835974 containerd[1483]: time="2025-05-09T01:41:51.835554814Z" level=info msg="connecting to shim 4df8256ded28f16d5db8fe3b89fa36b5a861f5dbac9c9cbd0a43c2495546f7d9" address="unix:///run/containerd/s/e99953d991061700d28b344f6312b450065b544bec2d8fbb0c344ab0de70a7b0" namespace=k8s.io protocol=ttrpc version=3 May 9 01:41:51.867163 systemd[1]: Started cri-containerd-748f80c43dd4f281001432c6dc884afd775c86017e85aa3e2d14c6db6a361fc6.scope - libcontainer container 748f80c43dd4f281001432c6dc884afd775c86017e85aa3e2d14c6db6a361fc6. May 9 01:41:51.949651 systemd[1]: Started cri-containerd-4df8256ded28f16d5db8fe3b89fa36b5a861f5dbac9c9cbd0a43c2495546f7d9.scope - libcontainer container 4df8256ded28f16d5db8fe3b89fa36b5a861f5dbac9c9cbd0a43c2495546f7d9. May 9 01:41:51.959852 systemd[1]: Started cri-containerd-faa10c70b8d8202a34ee9b4a8f4ae3d072fe2107a045c51d05737ea434d4344c.scope - libcontainer container faa10c70b8d8202a34ee9b4a8f4ae3d072fe2107a045c51d05737ea434d4344c. 
May 9 01:41:52.064981 containerd[1483]: time="2025-05-09T01:41:52.063769941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal,Uid:6cc20b4385be144e1b2e55e6434c22e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"748f80c43dd4f281001432c6dc884afd775c86017e85aa3e2d14c6db6a361fc6\"" May 9 01:41:52.070080 containerd[1483]: time="2025-05-09T01:41:52.068623093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal,Uid:80d931ceda2a65aff40a80d00d44fb7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4df8256ded28f16d5db8fe3b89fa36b5a861f5dbac9c9cbd0a43c2495546f7d9\"" May 9 01:41:52.070514 containerd[1483]: time="2025-05-09T01:41:52.070433419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-bbb05de7dc.novalocal,Uid:00ae8c922f166b57877cf66024788381,Namespace:kube-system,Attempt:0,} returns sandbox id \"faa10c70b8d8202a34ee9b4a8f4ae3d072fe2107a045c51d05737ea434d4344c\"" May 9 01:41:52.075607 containerd[1483]: time="2025-05-09T01:41:52.075578478Z" level=info msg="CreateContainer within sandbox \"748f80c43dd4f281001432c6dc884afd775c86017e85aa3e2d14c6db6a361fc6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 01:41:52.076201 containerd[1483]: time="2025-05-09T01:41:52.076152014Z" level=info msg="CreateContainer within sandbox \"faa10c70b8d8202a34ee9b4a8f4ae3d072fe2107a045c51d05737ea434d4344c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 01:41:52.077756 kubelet[2442]: E0509 01:41:52.077729 2442 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.153:6443: connect: connection refused May 9 01:41:52.091295 containerd[1483]: time="2025-05-09T01:41:52.091263716Z" level=info msg="Container 2e3a27e5fbcbfbf0ecccdcab7a20e1ba42a136bd38fe033ab812a65fd7bbff20: CDI devices from CRI Config.CDIDevices: []" May 9 01:41:52.094772 containerd[1483]: time="2025-05-09T01:41:52.094689130Z" level=info msg="CreateContainer within sandbox \"4df8256ded28f16d5db8fe3b89fa36b5a861f5dbac9c9cbd0a43c2495546f7d9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 01:41:52.108039 containerd[1483]: time="2025-05-09T01:41:52.107482294Z" level=info msg="CreateContainer within sandbox \"748f80c43dd4f281001432c6dc884afd775c86017e85aa3e2d14c6db6a361fc6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2e3a27e5fbcbfbf0ecccdcab7a20e1ba42a136bd38fe033ab812a65fd7bbff20\"" May 9 01:41:52.108612 containerd[1483]: time="2025-05-09T01:41:52.108574252Z" level=info msg="StartContainer for \"2e3a27e5fbcbfbf0ecccdcab7a20e1ba42a136bd38fe033ab812a65fd7bbff20\"" May 9 01:41:52.111095 containerd[1483]: time="2025-05-09T01:41:52.111062719Z" level=info msg="connecting to shim 2e3a27e5fbcbfbf0ecccdcab7a20e1ba42a136bd38fe033ab812a65fd7bbff20" address="unix:///run/containerd/s/40d759f6102056eebf28cc5d597ae4325b33f0ffc31f63bc7c0811ef13d7ba9f" protocol=ttrpc version=3 May 9 01:41:52.117338 containerd[1483]: time="2025-05-09T01:41:52.117279048Z" level=info msg="Container fa584e48973dd9c101627ebd1781d81912dda3c9317945899997295f7ee5b486: CDI devices from CRI Config.CDIDevices: []" May 9 01:41:52.132216 containerd[1483]: 
time="2025-05-09T01:41:52.132081920Z" level=info msg="Container fccc4b6b6194d72a3101e2a9dfdd36b3ec7dd2a6d486c1671f4f5fc3751d06cc: CDI devices from CRI Config.CDIDevices: []" May 9 01:41:52.141153 systemd[1]: Started cri-containerd-2e3a27e5fbcbfbf0ecccdcab7a20e1ba42a136bd38fe033ab812a65fd7bbff20.scope - libcontainer container 2e3a27e5fbcbfbf0ecccdcab7a20e1ba42a136bd38fe033ab812a65fd7bbff20. May 9 01:41:52.145897 containerd[1483]: time="2025-05-09T01:41:52.144911252Z" level=info msg="CreateContainer within sandbox \"faa10c70b8d8202a34ee9b4a8f4ae3d072fe2107a045c51d05737ea434d4344c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fa584e48973dd9c101627ebd1781d81912dda3c9317945899997295f7ee5b486\"" May 9 01:41:52.147196 containerd[1483]: time="2025-05-09T01:41:52.146407027Z" level=info msg="StartContainer for \"fa584e48973dd9c101627ebd1781d81912dda3c9317945899997295f7ee5b486\"" May 9 01:41:52.148445 containerd[1483]: time="2025-05-09T01:41:52.148403622Z" level=info msg="connecting to shim fa584e48973dd9c101627ebd1781d81912dda3c9317945899997295f7ee5b486" address="unix:///run/containerd/s/bf1b318224adee0cf6a5f2addccbeec7e0ecf44948709a3036e8f5f6e6ae37c4" protocol=ttrpc version=3 May 9 01:41:52.150997 containerd[1483]: time="2025-05-09T01:41:52.150928908Z" level=info msg="CreateContainer within sandbox \"4df8256ded28f16d5db8fe3b89fa36b5a861f5dbac9c9cbd0a43c2495546f7d9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fccc4b6b6194d72a3101e2a9dfdd36b3ec7dd2a6d486c1671f4f5fc3751d06cc\"" May 9 01:41:52.152946 containerd[1483]: time="2025-05-09T01:41:52.152560658Z" level=info msg="StartContainer for \"fccc4b6b6194d72a3101e2a9dfdd36b3ec7dd2a6d486c1671f4f5fc3751d06cc\"" May 9 01:41:52.154674 containerd[1483]: time="2025-05-09T01:41:52.154648193Z" level=info msg="connecting to shim fccc4b6b6194d72a3101e2a9dfdd36b3ec7dd2a6d486c1671f4f5fc3751d06cc" address="unix:///run/containerd/s/e99953d991061700d28b344f6312b450065b544bec2d8fbb0c344ab0de70a7b0" protocol=ttrpc version=3 May 9 01:41:52.209212 systemd[1]: Started cri-containerd-fccc4b6b6194d72a3101e2a9dfdd36b3ec7dd2a6d486c1671f4f5fc3751d06cc.scope - libcontainer container fccc4b6b6194d72a3101e2a9dfdd36b3ec7dd2a6d486c1671f4f5fc3751d06cc. May 9 01:41:52.223163 systemd[1]: Started cri-containerd-fa584e48973dd9c101627ebd1781d81912dda3c9317945899997295f7ee5b486.scope - libcontainer container fa584e48973dd9c101627ebd1781d81912dda3c9317945899997295f7ee5b486. 
May 9 01:41:52.253278 containerd[1483]: time="2025-05-09T01:41:52.253214454Z" level=info msg="StartContainer for \"2e3a27e5fbcbfbf0ecccdcab7a20e1ba42a136bd38fe033ab812a65fd7bbff20\" returns successfully" May 9 01:41:52.278933 kubelet[2442]: E0509 01:41:52.277579 2442 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.153:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.153:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-n-bbb05de7dc.novalocal.183db85605c46f26 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-bbb05de7dc.novalocal,UID:ci-4284-0-0-n-bbb05de7dc.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-bbb05de7dc.novalocal,},FirstTimestamp:2025-05-09 01:41:50.09307831 +0000 UTC m=+0.935702782,LastTimestamp:2025-05-09 01:41:50.09307831 +0000 UTC m=+0.935702782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-bbb05de7dc.novalocal,}" May 9 01:41:52.307123 containerd[1483]: time="2025-05-09T01:41:52.307087864Z" level=info msg="StartContainer for \"fccc4b6b6194d72a3101e2a9dfdd36b3ec7dd2a6d486c1671f4f5fc3751d06cc\" returns successfully" May 9 01:41:52.348077 containerd[1483]: time="2025-05-09T01:41:52.348029320Z" level=info msg="StartContainer for \"fa584e48973dd9c101627ebd1781d81912dda3c9317945899997295f7ee5b486\" returns successfully" May 9 01:41:53.260070 kubelet[2442]: I0509 01:41:53.260030 2442 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:54.973680 kubelet[2442]: I0509 01:41:54.973281 2442 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:55.036133 kubelet[2442]: E0509 01:41:55.036088 2442 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" May 9 01:41:55.092529 kubelet[2442]: I0509 01:41:55.092482 2442 apiserver.go:52] "Watching apiserver" May 9 01:41:55.126127 kubelet[2442]: I0509 01:41:55.126066 2442 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 01:41:55.198769 kubelet[2442]: E0509 01:41:55.198722 2442 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:56.209299 kubelet[2442]: W0509 01:41:56.209192 2442 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 01:41:57.733381 systemd[1]: Reload requested from client PID 2713 ('systemctl') (unit session-11.scope)... May 9 01:41:57.733422 systemd[1]: Reloading... May 9 01:41:57.889012 zram_generator::config[2762]: No configuration found. 
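[Editor's note] The warnings.go:70 messages above fire because the node name ci-4284-0-0-n-bbb05de7dc.novalocal is used as the static pods' hostname, and a hostname should be a single RFC 1123 DNS label, which must not contain dots. A quick check of that rule (the regexp is the standard RFC 1123 label pattern, not something taken from this log):

```go
package main

import (
	"fmt"
	"regexp"
)

// dnsLabel is the RFC 1123 label shape: lowercase alphanumerics and '-',
// starting and ending alphanumeric, at most 63 characters. Dots are not
// allowed, which is what the kubelet warning above complains about.
var dnsLabel = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$`)

func main() {
	for _, name := range []string{
		"ci-4284-0-0-n-bbb05de7dc.novalocal", // node name from the log: contains dots
		"ci-4284-0-0-n-bbb05de7dc",           // dot-free variant passes
	} {
		fmt.Printf("%-40s valid label: %v\n", name, dnsLabel.MatchString(name))
	}
}
```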
May 9 01:41:58.020672 kubelet[2442]: W0509 01:41:58.020356 2442 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 01:41:58.074638 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 01:41:58.227605 systemd[1]: Reloading finished in 493 ms. May 9 01:41:58.269328 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:41:58.270118 kubelet[2442]: E0509 01:41:58.269174 2442 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4284-0-0-n-bbb05de7dc.novalocal.183db85605c46f26 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-bbb05de7dc.novalocal,UID:ci-4284-0-0-n-bbb05de7dc.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-bbb05de7dc.novalocal,},FirstTimestamp:2025-05-09 01:41:50.09307831 +0000 UTC m=+0.935702782,LastTimestamp:2025-05-09 01:41:50.09307831 +0000 UTC m=+0.935702782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-bbb05de7dc.novalocal,}" May 9 01:41:58.278436 systemd[1]: kubelet.service: Deactivated successfully. May 9 01:41:58.278765 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:41:58.278851 systemd[1]: kubelet.service: Consumed 1.632s CPU time, 116M memory peak. May 9 01:41:58.283285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 01:41:58.783857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 01:41:58.796417 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 01:41:59.062655 kubelet[2823]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 01:41:59.062655 kubelet[2823]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 01:41:59.062655 kubelet[2823]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
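[Editor's note] When kubelet 2823 comes back up just below, it reports "Client rotation is on" and loads its rotated credentials from /var/lib/kubelet/pki/kubelet-client-current.pem, a single file holding both the certificate and the private key. A hedged sketch of reading such a combined pair; to my understanding Go's crypto/tls accepts the same file for both arguments because it scans PEM blocks for the matching types, but treat that as an assumption:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Combined cert+key bundle as used by the kubelet's client rotation.
	const pair = "/var/lib/kubelet/pki/kubelet-client-current.pem"

	// Passing the same path twice: the loader picks CERTIFICATE blocks for the
	// chain and the first PRIVATE KEY block for the key (assumed behavior).
	cert, err := tls.LoadX509KeyPair(pair, pair)
	if err != nil {
		fmt.Println("load failed (expected off-node):", err)
		return
	}
	fmt.Printf("loaded chain with %d certificate(s)\n", len(cert.Certificate))
}
```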
May 9 01:41:59.066125 kubelet[2823]: I0509 01:41:59.063045 2823 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 01:41:59.110863 kubelet[2823]: I0509 01:41:59.110771 2823 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 01:41:59.110863 kubelet[2823]: I0509 01:41:59.110835 2823 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 01:41:59.113069 kubelet[2823]: I0509 01:41:59.112723 2823 server.go:927] "Client rotation is on, will bootstrap in background" May 9 01:41:59.118991 kubelet[2823]: I0509 01:41:59.118920 2823 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 01:41:59.127299 kubelet[2823]: I0509 01:41:59.122860 2823 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 01:41:59.146275 kubelet[2823]: I0509 01:41:59.145938 2823 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 01:41:59.147042 kubelet[2823]: I0509 01:41:59.146693 2823 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 01:41:59.147954 kubelet[2823]: I0509 01:41:59.147128 2823 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-bbb05de7dc.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 01:41:59.148287 kubelet[2823]: I0509 01:41:59.148150 2823 topology_manager.go:138] "Creating topology manager with none policy" May 9 01:41:59.148287 kubelet[2823]: I0509 01:41:59.148172 2823 container_manager_linux.go:301] "Creating device plugin manager" May 9 01:41:59.148287 kubelet[2823]: I0509 01:41:59.148246 2823 state_mem.go:36] "Initialized new in-memory state store" May 9 01:41:59.149978 kubelet[2823]: I0509 01:41:59.149788 2823 kubelet.go:400] "Attempting to sync node with API server" May 9 01:41:59.149978 kubelet[2823]: I0509 01:41:59.149834 2823 kubelet.go:301] "Adding static 
pod path" path="/etc/kubernetes/manifests" May 9 01:41:59.149978 kubelet[2823]: I0509 01:41:59.149866 2823 kubelet.go:312] "Adding apiserver pod source" May 9 01:41:59.149978 kubelet[2823]: I0509 01:41:59.149886 2823 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 01:41:59.155074 kubelet[2823]: I0509 01:41:59.154370 2823 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 9 01:41:59.155074 kubelet[2823]: I0509 01:41:59.154561 2823 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 01:41:59.155074 kubelet[2823]: I0509 01:41:59.155025 2823 server.go:1264] "Started kubelet" May 9 01:41:59.158610 kubelet[2823]: I0509 01:41:59.157913 2823 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 01:41:59.167362 kubelet[2823]: I0509 01:41:59.167284 2823 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 01:41:59.168769 kubelet[2823]: I0509 01:41:59.168729 2823 server.go:455] "Adding debug handlers to kubelet server" May 9 01:41:59.172106 kubelet[2823]: I0509 01:41:59.170559 2823 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 01:41:59.172106 kubelet[2823]: I0509 01:41:59.170795 2823 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 01:41:59.179583 kubelet[2823]: I0509 01:41:59.178999 2823 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 01:41:59.186491 kubelet[2823]: I0509 01:41:59.184651 2823 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 01:41:59.186491 kubelet[2823]: I0509 01:41:59.185063 2823 reconciler.go:26] "Reconciler: start to sync state" May 9 01:41:59.192562 kubelet[2823]: I0509 01:41:59.192511 2823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 01:41:59.197231 kubelet[2823]: I0509 01:41:59.195138 2823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 01:41:59.197231 kubelet[2823]: I0509 01:41:59.195186 2823 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 01:41:59.197231 kubelet[2823]: I0509 01:41:59.195213 2823 kubelet.go:2337] "Starting kubelet main sync loop" May 9 01:41:59.197231 kubelet[2823]: E0509 01:41:59.196026 2823 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 01:41:59.238424 kubelet[2823]: I0509 01:41:59.238389 2823 factory.go:221] Registration of the systemd container factory successfully May 9 01:41:59.238720 kubelet[2823]: I0509 01:41:59.238694 2823 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 01:41:59.240996 kubelet[2823]: I0509 01:41:59.240917 2823 factory.go:221] Registration of the containerd container factory successfully May 9 01:41:59.255900 kubelet[2823]: E0509 01:41:59.254943 2823 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 01:41:59.284503 kubelet[2823]: I0509 01:41:59.284316 2823 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.297023 kubelet[2823]: E0509 01:41:59.296887 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 01:41:59.325493 kubelet[2823]: I0509 01:41:59.325147 2823 kubelet_node_status.go:112] "Node was previously registered" node="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.325493 kubelet[2823]: I0509 01:41:59.325251 2823 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.345861 kubelet[2823]: I0509 01:41:59.344951 2823 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 01:41:59.345861 kubelet[2823]: I0509 01:41:59.345058 2823 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 01:41:59.345861 kubelet[2823]: I0509 01:41:59.345083 2823 state_mem.go:36] "Initialized new in-memory state store" May 9 01:41:59.345861 kubelet[2823]: I0509 01:41:59.345264 2823 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 01:41:59.345861 kubelet[2823]: I0509 01:41:59.345282 2823 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 01:41:59.345861 kubelet[2823]: I0509 01:41:59.345304 2823 policy_none.go:49] "None policy: Start" May 9 01:41:59.348352 kubelet[2823]: I0509 01:41:59.347820 2823 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 01:41:59.348352 kubelet[2823]: I0509 01:41:59.347852 2823 state_mem.go:35] "Initializing new in-memory state store" May 9 01:41:59.348352 kubelet[2823]: I0509 01:41:59.348160 2823 state_mem.go:75] "Updated machine memory state" May 9 01:41:59.361740 kubelet[2823]: I0509 01:41:59.361697 2823 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 01:41:59.361948 kubelet[2823]: I0509 01:41:59.361891 2823 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 01:41:59.362069 kubelet[2823]: I0509 01:41:59.362045 2823 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 01:41:59.497898 kubelet[2823]: I0509 01:41:59.497433 2823 topology_manager.go:215] "Topology Admit Handler" podUID="6cc20b4385be144e1b2e55e6434c22e3" podNamespace="kube-system" podName="kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.497898 kubelet[2823]: I0509 01:41:59.497585 2823 topology_manager.go:215] "Topology Admit Handler" podUID="80d931ceda2a65aff40a80d00d44fb7d" podNamespace="kube-system" podName="kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.497898 kubelet[2823]: I0509 01:41:59.497693 2823 topology_manager.go:215] "Topology Admit Handler" podUID="00ae8c922f166b57877cf66024788381" podNamespace="kube-system" podName="kube-scheduler-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.525530 kubelet[2823]: W0509 01:41:59.525083 2823 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 01:41:59.525530 kubelet[2823]: E0509 01:41:59.525171 2823 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" 
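[Editor's note] The nodeConfig dumped at 01:41:59 above carries the default hard eviction thresholds, e.g. memory.available < 100Mi (absolute quantity) and nodefs.available < 10% (fraction of capacity). A sketch of how such a threshold is evaluated, with capacities invented purely for the example (the log does not show them):

```go
package main

import "fmt"

// threshold mirrors the HardEvictionThresholds entries in the nodeConfig
// above: either an absolute quantity in bytes or a percentage of capacity.
type threshold struct {
	signal     string
	quantity   int64   // bytes; 0 means "use percentage"
	percentage float64 // fraction of capacity, e.g. 0.1 for nodefs.available
}

func (t threshold) crossed(available, capacity int64) bool {
	limit := t.quantity
	if limit == 0 {
		limit = int64(t.percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	nodefs := threshold{signal: "nodefs.available", percentage: 0.1}     // 10%

	// Capacities below are made up for illustration.
	fmt.Println(memory.crossed(64<<20, 4<<30))   // true: 64Mi left of 4Gi
	fmt.Println(nodefs.crossed(20<<30, 100<<30)) // false: 20% of disk still free
}
```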
May 9 01:41:59.525530 kubelet[2823]: W0509 01:41:59.525262 2823 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 01:41:59.525530 kubelet[2823]: W0509 01:41:59.525424 2823 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 9 01:41:59.525530 kubelet[2823]: E0509 01:41:59.525458 2823 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.688547 kubelet[2823]: I0509 01:41:59.688193 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6cc20b4385be144e1b2e55e6434c22e3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"6cc20b4385be144e1b2e55e6434c22e3\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.688547 kubelet[2823]: I0509 01:41:59.688255 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80d931ceda2a65aff40a80d00d44fb7d-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"80d931ceda2a65aff40a80d00d44fb7d\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.688547 kubelet[2823]: I0509 01:41:59.688282 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80d931ceda2a65aff40a80d00d44fb7d-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"80d931ceda2a65aff40a80d00d44fb7d\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.688547 kubelet[2823]: I0509 01:41:59.688307 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80d931ceda2a65aff40a80d00d44fb7d-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"80d931ceda2a65aff40a80d00d44fb7d\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.688813 kubelet[2823]: I0509 01:41:59.688329 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6cc20b4385be144e1b2e55e6434c22e3-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"6cc20b4385be144e1b2e55e6434c22e3\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.688813 kubelet[2823]: I0509 01:41:59.688348 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80d931ceda2a65aff40a80d00d44fb7d-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"80d931ceda2a65aff40a80d00d44fb7d\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.688813 kubelet[2823]: I0509 01:41:59.688372 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80d931ceda2a65aff40a80d00d44fb7d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"80d931ceda2a65aff40a80d00d44fb7d\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.688813 kubelet[2823]: I0509 01:41:59.688395 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00ae8c922f166b57877cf66024788381-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"00ae8c922f166b57877cf66024788381\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:41:59.688813 kubelet[2823]: I0509 01:41:59.688416 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6cc20b4385be144e1b2e55e6434c22e3-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal\" (UID: \"6cc20b4385be144e1b2e55e6434c22e3\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:42:00.153882 kubelet[2823]: I0509 01:42:00.152185 2823 apiserver.go:52] "Watching apiserver" May 9 01:42:00.193289 kubelet[2823]: I0509 01:42:00.193222 2823 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 01:42:00.257987 kubelet[2823]: I0509 01:42:00.256706 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284-0-0-n-bbb05de7dc.novalocal" podStartSLOduration=1.256681782 podStartE2EDuration="1.256681782s" podCreationTimestamp="2025-05-09 01:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 01:42:00.256052421 +0000 UTC m=+1.448848785" watchObservedRunningTime="2025-05-09 01:42:00.256681782 +0000 UTC m=+1.449478127" May 9 01:42:00.295040 kubelet[2823]: I0509 01:42:00.294764 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284-0-0-n-bbb05de7dc.novalocal" podStartSLOduration=4.294742132 podStartE2EDuration="4.294742132s" podCreationTimestamp="2025-05-09 01:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 01:42:00.294474522 +0000 UTC m=+1.487270836" watchObservedRunningTime="2025-05-09 01:42:00.294742132 +0000 UTC m=+1.487538446" May 9 01:42:00.463044 kubelet[2823]: I0509 01:42:00.462098 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-bbb05de7dc.novalocal" podStartSLOduration=2.46205869 podStartE2EDuration="2.46205869s" podCreationTimestamp="2025-05-09 01:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 01:42:00.460053848 +0000 UTC m=+1.652850242" watchObservedRunningTime="2025-05-09 01:42:00.46205869 +0000 UTC m=+1.654855125" May 9 01:42:05.845107 sudo[1758]: pam_unix(sudo:session): session closed for user root May 9 01:42:06.005066 sshd[1757]: Connection closed by 172.24.4.1 port 39180 May 9 01:42:06.006431 sshd-session[1754]: pam_unix(sshd:session): session closed for user core May 9 01:42:06.013849 systemd[1]: sshd@8-172.24.4.153:22-172.24.4.1:39180.service: 
Deactivated successfully. May 9 01:42:06.020516 systemd[1]: session-11.scope: Deactivated successfully. May 9 01:42:06.021182 systemd[1]: session-11.scope: Consumed 7.266s CPU time, 244.5M memory peak. May 9 01:42:06.026305 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. May 9 01:42:06.030207 systemd-logind[1458]: Removed session 11. May 9 01:42:12.506707 kubelet[2823]: I0509 01:42:12.506524 2823 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 01:42:12.507803 containerd[1483]: time="2025-05-09T01:42:12.507079082Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 01:42:12.511062 kubelet[2823]: I0509 01:42:12.508756 2823 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 01:42:13.216645 kubelet[2823]: I0509 01:42:13.216083 2823 topology_manager.go:215] "Topology Admit Handler" podUID="42b01e14-cdd7-44c1-88ce-78bcb424e575" podNamespace="kube-system" podName="kube-proxy-vnm8v" May 9 01:42:13.231259 systemd[1]: Created slice kubepods-besteffort-pod42b01e14_cdd7_44c1_88ce_78bcb424e575.slice - libcontainer container kubepods-besteffort-pod42b01e14_cdd7_44c1_88ce_78bcb424e575.slice. May 9 01:42:13.373525 kubelet[2823]: I0509 01:42:13.373466 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42b01e14-cdd7-44c1-88ce-78bcb424e575-kube-proxy\") pod \"kube-proxy-vnm8v\" (UID: \"42b01e14-cdd7-44c1-88ce-78bcb424e575\") " pod="kube-system/kube-proxy-vnm8v" May 9 01:42:13.373525 kubelet[2823]: I0509 01:42:13.373504 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42b01e14-cdd7-44c1-88ce-78bcb424e575-xtables-lock\") pod \"kube-proxy-vnm8v\" (UID: \"42b01e14-cdd7-44c1-88ce-78bcb424e575\") " pod="kube-system/kube-proxy-vnm8v" May 9 01:42:13.373525 kubelet[2823]: I0509 01:42:13.373525 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42b01e14-cdd7-44c1-88ce-78bcb424e575-lib-modules\") pod \"kube-proxy-vnm8v\" (UID: \"42b01e14-cdd7-44c1-88ce-78bcb424e575\") " pod="kube-system/kube-proxy-vnm8v" May 9 01:42:13.373757 kubelet[2823]: I0509 01:42:13.373545 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8kd5\" (UniqueName: \"kubernetes.io/projected/42b01e14-cdd7-44c1-88ce-78bcb424e575-kube-api-access-w8kd5\") pod \"kube-proxy-vnm8v\" (UID: \"42b01e14-cdd7-44c1-88ce-78bcb424e575\") " pod="kube-system/kube-proxy-vnm8v" May 9 01:42:13.540407 containerd[1483]: time="2025-05-09T01:42:13.539628242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vnm8v,Uid:42b01e14-cdd7-44c1-88ce-78bcb424e575,Namespace:kube-system,Attempt:0,}" May 9 01:42:13.566045 kubelet[2823]: I0509 01:42:13.565996 2823 topology_manager.go:215] "Topology Admit Handler" podUID="9fa3472a-d8fb-4397-bf76-5ae43b953a75" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-nnmrj" May 9 01:42:13.579219 kubelet[2823]: I0509 01:42:13.575208 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9fa3472a-d8fb-4397-bf76-5ae43b953a75-var-lib-calico\") pod 
\"tigera-operator-797db67f8-nnmrj\" (UID: \"9fa3472a-d8fb-4397-bf76-5ae43b953a75\") " pod="tigera-operator/tigera-operator-797db67f8-nnmrj" May 9 01:42:13.579219 kubelet[2823]: I0509 01:42:13.575256 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsfkk\" (UniqueName: \"kubernetes.io/projected/9fa3472a-d8fb-4397-bf76-5ae43b953a75-kube-api-access-lsfkk\") pod \"tigera-operator-797db67f8-nnmrj\" (UID: \"9fa3472a-d8fb-4397-bf76-5ae43b953a75\") " pod="tigera-operator/tigera-operator-797db67f8-nnmrj" May 9 01:42:13.589405 systemd[1]: Created slice kubepods-besteffort-pod9fa3472a_d8fb_4397_bf76_5ae43b953a75.slice - libcontainer container kubepods-besteffort-pod9fa3472a_d8fb_4397_bf76_5ae43b953a75.slice. May 9 01:42:13.602413 containerd[1483]: time="2025-05-09T01:42:13.602238937Z" level=info msg="connecting to shim 1f24a95f222ff7d4179b0ae674283cf72ed0c46b1afb4dd776c2360710e16c9b" address="unix:///run/containerd/s/0287cdee3a6a9077f0d2d6f17356da84400f1c0b44dbe50ea688332cfc262896" namespace=k8s.io protocol=ttrpc version=3 May 9 01:42:13.639139 systemd[1]: Started cri-containerd-1f24a95f222ff7d4179b0ae674283cf72ed0c46b1afb4dd776c2360710e16c9b.scope - libcontainer container 1f24a95f222ff7d4179b0ae674283cf72ed0c46b1afb4dd776c2360710e16c9b. May 9 01:42:13.679913 containerd[1483]: time="2025-05-09T01:42:13.679854644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vnm8v,Uid:42b01e14-cdd7-44c1-88ce-78bcb424e575,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f24a95f222ff7d4179b0ae674283cf72ed0c46b1afb4dd776c2360710e16c9b\"" May 9 01:42:13.690313 containerd[1483]: time="2025-05-09T01:42:13.690228430Z" level=info msg="CreateContainer within sandbox \"1f24a95f222ff7d4179b0ae674283cf72ed0c46b1afb4dd776c2360710e16c9b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 01:42:13.706110 containerd[1483]: time="2025-05-09T01:42:13.705996266Z" level=info msg="Container ff14831cadb091a7fe1867415a56a32a81422c6133905327dccac1dfdec2883f: CDI devices from CRI Config.CDIDevices: []" May 9 01:42:13.721436 containerd[1483]: time="2025-05-09T01:42:13.721377401Z" level=info msg="CreateContainer within sandbox \"1f24a95f222ff7d4179b0ae674283cf72ed0c46b1afb4dd776c2360710e16c9b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ff14831cadb091a7fe1867415a56a32a81422c6133905327dccac1dfdec2883f\"" May 9 01:42:13.724138 containerd[1483]: time="2025-05-09T01:42:13.722335557Z" level=info msg="StartContainer for \"ff14831cadb091a7fe1867415a56a32a81422c6133905327dccac1dfdec2883f\"" May 9 01:42:13.724138 containerd[1483]: time="2025-05-09T01:42:13.723818036Z" level=info msg="connecting to shim ff14831cadb091a7fe1867415a56a32a81422c6133905327dccac1dfdec2883f" address="unix:///run/containerd/s/0287cdee3a6a9077f0d2d6f17356da84400f1c0b44dbe50ea688332cfc262896" protocol=ttrpc version=3 May 9 01:42:13.750122 systemd[1]: Started cri-containerd-ff14831cadb091a7fe1867415a56a32a81422c6133905327dccac1dfdec2883f.scope - libcontainer container ff14831cadb091a7fe1867415a56a32a81422c6133905327dccac1dfdec2883f. 
May 9 01:42:13.804173 containerd[1483]: time="2025-05-09T01:42:13.804064783Z" level=info msg="StartContainer for \"ff14831cadb091a7fe1867415a56a32a81422c6133905327dccac1dfdec2883f\" returns successfully" May 9 01:42:13.894233 containerd[1483]: time="2025-05-09T01:42:13.894162926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-nnmrj,Uid:9fa3472a-d8fb-4397-bf76-5ae43b953a75,Namespace:tigera-operator,Attempt:0,}" May 9 01:42:13.922626 containerd[1483]: time="2025-05-09T01:42:13.922536515Z" level=info msg="connecting to shim 11eddf98cfd2c924a4dfdff73c06f2f00dda6cce1041a55ae55cbaa576d5e265" address="unix:///run/containerd/s/358c1699fac94448e064976db9f69dc73b2e857fee5c6f2cb1bca7d450826928" namespace=k8s.io protocol=ttrpc version=3 May 9 01:42:13.959124 systemd[1]: Started cri-containerd-11eddf98cfd2c924a4dfdff73c06f2f00dda6cce1041a55ae55cbaa576d5e265.scope - libcontainer container 11eddf98cfd2c924a4dfdff73c06f2f00dda6cce1041a55ae55cbaa576d5e265. May 9 01:42:14.015665 containerd[1483]: time="2025-05-09T01:42:14.015582791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-nnmrj,Uid:9fa3472a-d8fb-4397-bf76-5ae43b953a75,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"11eddf98cfd2c924a4dfdff73c06f2f00dda6cce1041a55ae55cbaa576d5e265\"" May 9 01:42:14.019658 containerd[1483]: time="2025-05-09T01:42:14.019020626Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 9 01:42:15.493856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4263216086.mount: Deactivated successfully. May 9 01:42:16.287064 containerd[1483]: time="2025-05-09T01:42:16.287008095Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:16.288750 containerd[1483]: time="2025-05-09T01:42:16.288478413Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 9 01:42:16.291254 containerd[1483]: time="2025-05-09T01:42:16.289931430Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:16.293012 containerd[1483]: time="2025-05-09T01:42:16.292453028Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:16.293355 containerd[1483]: time="2025-05-09T01:42:16.293202121Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.274147757s" May 9 01:42:16.293355 containerd[1483]: time="2025-05-09T01:42:16.293233928Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 9 01:42:16.297039 containerd[1483]: time="2025-05-09T01:42:16.296570537Z" level=info msg="CreateContainer within sandbox \"11eddf98cfd2c924a4dfdff73c06f2f00dda6cce1041a55ae55cbaa576d5e265\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 9 01:42:16.314916 containerd[1483]: time="2025-05-09T01:42:16.314285431Z" level=info 
msg="Container e700fa577cd9229c5070b3a198b48b6834856b573fdad72a54a71de5ec2b4c2a: CDI devices from CRI Config.CDIDevices: []" May 9 01:42:16.320264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199171240.mount: Deactivated successfully. May 9 01:42:16.327539 containerd[1483]: time="2025-05-09T01:42:16.327491929Z" level=info msg="CreateContainer within sandbox \"11eddf98cfd2c924a4dfdff73c06f2f00dda6cce1041a55ae55cbaa576d5e265\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e700fa577cd9229c5070b3a198b48b6834856b573fdad72a54a71de5ec2b4c2a\"" May 9 01:42:16.328868 containerd[1483]: time="2025-05-09T01:42:16.328234882Z" level=info msg="StartContainer for \"e700fa577cd9229c5070b3a198b48b6834856b573fdad72a54a71de5ec2b4c2a\"" May 9 01:42:16.329107 containerd[1483]: time="2025-05-09T01:42:16.329076640Z" level=info msg="connecting to shim e700fa577cd9229c5070b3a198b48b6834856b573fdad72a54a71de5ec2b4c2a" address="unix:///run/containerd/s/358c1699fac94448e064976db9f69dc73b2e857fee5c6f2cb1bca7d450826928" protocol=ttrpc version=3 May 9 01:42:16.349133 systemd[1]: Started cri-containerd-e700fa577cd9229c5070b3a198b48b6834856b573fdad72a54a71de5ec2b4c2a.scope - libcontainer container e700fa577cd9229c5070b3a198b48b6834856b573fdad72a54a71de5ec2b4c2a. May 9 01:42:16.388042 containerd[1483]: time="2025-05-09T01:42:16.387879863Z" level=info msg="StartContainer for \"e700fa577cd9229c5070b3a198b48b6834856b573fdad72a54a71de5ec2b4c2a\" returns successfully" May 9 01:42:17.394344 kubelet[2823]: I0509 01:42:17.394215 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vnm8v" podStartSLOduration=4.394180206 podStartE2EDuration="4.394180206s" podCreationTimestamp="2025-05-09 01:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 01:42:14.391607703 +0000 UTC m=+15.584404067" watchObservedRunningTime="2025-05-09 01:42:17.394180206 +0000 UTC m=+18.586976570" May 9 01:42:19.231568 kubelet[2823]: I0509 01:42:19.231494 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-nnmrj" podStartSLOduration=3.955375211 podStartE2EDuration="6.231478679s" podCreationTimestamp="2025-05-09 01:42:13 +0000 UTC" firstStartedPulling="2025-05-09 01:42:14.018275008 +0000 UTC m=+15.211071332" lastFinishedPulling="2025-05-09 01:42:16.294378486 +0000 UTC m=+17.487174800" observedRunningTime="2025-05-09 01:42:17.394669559 +0000 UTC m=+18.587465924" watchObservedRunningTime="2025-05-09 01:42:19.231478679 +0000 UTC m=+20.424275003" May 9 01:42:20.100843 kubelet[2823]: I0509 01:42:20.098643 2823 topology_manager.go:215] "Topology Admit Handler" podUID="2dd0a15a-d381-46ec-98f1-6c673000c988" podNamespace="calico-system" podName="calico-typha-7b6897c85-tnk8d" May 9 01:42:20.111243 systemd[1]: Created slice kubepods-besteffort-pod2dd0a15a_d381_46ec_98f1_6c673000c988.slice - libcontainer container kubepods-besteffort-pod2dd0a15a_d381_46ec_98f1_6c673000c988.slice. 
May 9 01:42:20.225037 kubelet[2823]: I0509 01:42:20.224998 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2dd0a15a-d381-46ec-98f1-6c673000c988-typha-certs\") pod \"calico-typha-7b6897c85-tnk8d\" (UID: \"2dd0a15a-d381-46ec-98f1-6c673000c988\") " pod="calico-system/calico-typha-7b6897c85-tnk8d" May 9 01:42:20.226378 kubelet[2823]: I0509 01:42:20.226133 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dd0a15a-d381-46ec-98f1-6c673000c988-tigera-ca-bundle\") pod \"calico-typha-7b6897c85-tnk8d\" (UID: \"2dd0a15a-d381-46ec-98f1-6c673000c988\") " pod="calico-system/calico-typha-7b6897c85-tnk8d" May 9 01:42:20.226460 kubelet[2823]: I0509 01:42:20.226404 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4zkv\" (UniqueName: \"kubernetes.io/projected/2dd0a15a-d381-46ec-98f1-6c673000c988-kube-api-access-r4zkv\") pod \"calico-typha-7b6897c85-tnk8d\" (UID: \"2dd0a15a-d381-46ec-98f1-6c673000c988\") " pod="calico-system/calico-typha-7b6897c85-tnk8d" May 9 01:42:20.236012 kubelet[2823]: I0509 01:42:20.235949 2823 topology_manager.go:215] "Topology Admit Handler" podUID="7caefea7-09f0-44d9-9e7c-6eb1f7146100" podNamespace="calico-system" podName="calico-node-df7bh" May 9 01:42:20.245494 systemd[1]: Created slice kubepods-besteffort-pod7caefea7_09f0_44d9_9e7c_6eb1f7146100.slice - libcontainer container kubepods-besteffort-pod7caefea7_09f0_44d9_9e7c_6eb1f7146100.slice. May 9 01:42:20.378138 kubelet[2823]: I0509 01:42:20.378006 2823 topology_manager.go:215] "Topology Admit Handler" podUID="12706b19-c70f-4b8e-b9f1-5ea62d04108c" podNamespace="calico-system" podName="csi-node-driver-s7ptc" May 9 01:42:20.379278 kubelet[2823]: E0509 01:42:20.379090 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7ptc" podUID="12706b19-c70f-4b8e-b9f1-5ea62d04108c" May 9 01:42:20.423140 containerd[1483]: time="2025-05-09T01:42:20.422957195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b6897c85-tnk8d,Uid:2dd0a15a-d381-46ec-98f1-6c673000c988,Namespace:calico-system,Attempt:0,}" May 9 01:42:20.429004 kubelet[2823]: I0509 01:42:20.428484 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7caefea7-09f0-44d9-9e7c-6eb1f7146100-var-lib-calico\") pod \"calico-node-df7bh\" (UID: \"7caefea7-09f0-44d9-9e7c-6eb1f7146100\") " pod="calico-system/calico-node-df7bh" May 9 01:42:20.429004 kubelet[2823]: I0509 01:42:20.428543 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7caefea7-09f0-44d9-9e7c-6eb1f7146100-cni-bin-dir\") pod \"calico-node-df7bh\" (UID: \"7caefea7-09f0-44d9-9e7c-6eb1f7146100\") " pod="calico-system/calico-node-df7bh" May 9 01:42:20.429004 kubelet[2823]: I0509 01:42:20.428567 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7caefea7-09f0-44d9-9e7c-6eb1f7146100-node-certs\") pod \"calico-node-df7bh\" 
(UID: \"7caefea7-09f0-44d9-9e7c-6eb1f7146100\") " pod="calico-system/calico-node-df7bh" May 9 01:42:20.429004 kubelet[2823]: I0509 01:42:20.428586 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7caefea7-09f0-44d9-9e7c-6eb1f7146100-flexvol-driver-host\") pod \"calico-node-df7bh\" (UID: \"7caefea7-09f0-44d9-9e7c-6eb1f7146100\") " pod="calico-system/calico-node-df7bh" May 9 01:42:20.429004 kubelet[2823]: I0509 01:42:20.428613 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7caefea7-09f0-44d9-9e7c-6eb1f7146100-var-run-calico\") pod \"calico-node-df7bh\" (UID: \"7caefea7-09f0-44d9-9e7c-6eb1f7146100\") " pod="calico-system/calico-node-df7bh" May 9 01:42:20.429306 kubelet[2823]: I0509 01:42:20.428660 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7caefea7-09f0-44d9-9e7c-6eb1f7146100-cni-net-dir\") pod \"calico-node-df7bh\" (UID: \"7caefea7-09f0-44d9-9e7c-6eb1f7146100\") " pod="calico-system/calico-node-df7bh" May 9 01:42:20.429306 kubelet[2823]: I0509 01:42:20.428718 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7caefea7-09f0-44d9-9e7c-6eb1f7146100-tigera-ca-bundle\") pod \"calico-node-df7bh\" (UID: \"7caefea7-09f0-44d9-9e7c-6eb1f7146100\") " pod="calico-system/calico-node-df7bh" May 9 01:42:20.429306 kubelet[2823]: I0509 01:42:20.428746 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7caefea7-09f0-44d9-9e7c-6eb1f7146100-xtables-lock\") pod \"calico-node-df7bh\" (UID: \"7caefea7-09f0-44d9-9e7c-6eb1f7146100\") " pod="calico-system/calico-node-df7bh" May 9 01:42:20.429306 kubelet[2823]: I0509 01:42:20.428764 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7caefea7-09f0-44d9-9e7c-6eb1f7146100-lib-modules\") pod \"calico-node-df7bh\" (UID: \"7caefea7-09f0-44d9-9e7c-6eb1f7146100\") " pod="calico-system/calico-node-df7bh" May 9 01:42:20.429306 kubelet[2823]: I0509 01:42:20.428802 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7caefea7-09f0-44d9-9e7c-6eb1f7146100-policysync\") pod \"calico-node-df7bh\" (UID: \"7caefea7-09f0-44d9-9e7c-6eb1f7146100\") " pod="calico-system/calico-node-df7bh" May 9 01:42:20.429457 kubelet[2823]: I0509 01:42:20.428820 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7caefea7-09f0-44d9-9e7c-6eb1f7146100-cni-log-dir\") pod \"calico-node-df7bh\" (UID: \"7caefea7-09f0-44d9-9e7c-6eb1f7146100\") " pod="calico-system/calico-node-df7bh" May 9 01:42:20.429457 kubelet[2823]: I0509 01:42:20.428839 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82kfv\" (UniqueName: \"kubernetes.io/projected/7caefea7-09f0-44d9-9e7c-6eb1f7146100-kube-api-access-82kfv\") pod \"calico-node-df7bh\" (UID: \"7caefea7-09f0-44d9-9e7c-6eb1f7146100\") " 
pod="calico-system/calico-node-df7bh" May 9 01:42:20.479502 containerd[1483]: time="2025-05-09T01:42:20.479441447Z" level=info msg="connecting to shim c0a5c7b5955bd716536af2f067c131cc05e2432216e78d72a304946f19345c3e" address="unix:///run/containerd/s/7e22de952560d8d5050ced5c4cddc824cfd960ccaf5c32102ab9e243ce8c45c6" namespace=k8s.io protocol=ttrpc version=3 May 9 01:42:20.534027 kubelet[2823]: I0509 01:42:20.529539 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12706b19-c70f-4b8e-b9f1-5ea62d04108c-kubelet-dir\") pod \"csi-node-driver-s7ptc\" (UID: \"12706b19-c70f-4b8e-b9f1-5ea62d04108c\") " pod="calico-system/csi-node-driver-s7ptc" May 9 01:42:20.534027 kubelet[2823]: I0509 01:42:20.529592 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnj7n\" (UniqueName: \"kubernetes.io/projected/12706b19-c70f-4b8e-b9f1-5ea62d04108c-kube-api-access-wnj7n\") pod \"csi-node-driver-s7ptc\" (UID: \"12706b19-c70f-4b8e-b9f1-5ea62d04108c\") " pod="calico-system/csi-node-driver-s7ptc" May 9 01:42:20.534027 kubelet[2823]: I0509 01:42:20.529625 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12706b19-c70f-4b8e-b9f1-5ea62d04108c-socket-dir\") pod \"csi-node-driver-s7ptc\" (UID: \"12706b19-c70f-4b8e-b9f1-5ea62d04108c\") " pod="calico-system/csi-node-driver-s7ptc" May 9 01:42:20.534027 kubelet[2823]: I0509 01:42:20.529646 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12706b19-c70f-4b8e-b9f1-5ea62d04108c-registration-dir\") pod \"csi-node-driver-s7ptc\" (UID: \"12706b19-c70f-4b8e-b9f1-5ea62d04108c\") " pod="calico-system/csi-node-driver-s7ptc" May 9 01:42:20.534027 kubelet[2823]: I0509 01:42:20.533132 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/12706b19-c70f-4b8e-b9f1-5ea62d04108c-varrun\") pod \"csi-node-driver-s7ptc\" (UID: \"12706b19-c70f-4b8e-b9f1-5ea62d04108c\") " pod="calico-system/csi-node-driver-s7ptc" May 9 01:42:20.577659 kubelet[2823]: E0509 01:42:20.558100 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:20.577659 kubelet[2823]: W0509 01:42:20.558125 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:20.577659 kubelet[2823]: E0509 01:42:20.558168 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:20.611792 systemd[1]: Started cri-containerd-c0a5c7b5955bd716536af2f067c131cc05e2432216e78d72a304946f19345c3e.scope - libcontainer container c0a5c7b5955bd716536af2f067c131cc05e2432216e78d72a304946f19345c3e. 
[the kubelet's plugin prober re-emits the same three-line FlexVolume failure (driver-call.go:262 "Failed to unmarshal output", driver-call.go:149 "executable file not found in $PATH", plugins.go:730 "Error dynamically probing plugins") roughly two dozen more times between 01:42:20.630 and 01:42:20.702 while probing the nodeagent~uds directory]
May 9 01:42:20.852837 containerd[1483]: time="2025-05-09T01:42:20.852788104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-df7bh,Uid:7caefea7-09f0-44d9-9e7c-6eb1f7146100,Namespace:calico-system,Attempt:0,}" May 9 01:42:20.914525 containerd[1483]: time="2025-05-09T01:42:20.912209289Z" level=info msg="connecting to shim 8935a12f40d0928a675c96b11b63f7b02d647011ac8631b0a0e5f812ccde110e" address="unix:///run/containerd/s/a3e55e404d4d5b09852083c1246aa8404bcf74b3f0fa928f5642ef4f51d51d4d" namespace=k8s.io protocol=ttrpc version=3 May 9 01:42:20.971561 systemd[1]: Started cri-containerd-8935a12f40d0928a675c96b11b63f7b02d647011ac8631b0a0e5f812ccde110e.scope - libcontainer container 8935a12f40d0928a675c96b11b63f7b02d647011ac8631b0a0e5f812ccde110e.
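While the calico-node and calico-typha sandboxes start, csi-node-driver-s7ptc keeps failing with NetworkReady=false (the lines at 01:42:22 and 01:42:24 below) because no CNI config exists yet; calico-node will drop one into the cni-net-dir host path once it is running. One way to observe the same condition the kubelet reports is the CRI Status call; a hedged sketch, with the socket path again an assumption:

```go
// Query the NetworkReady condition the kubelet is acting on ("cni plugin not
// initialized") straight from the CRI Status endpoint. Illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	resp, err := runtimev1.NewRuntimeServiceClient(conn).Status(ctx,
		&runtimev1.StatusRequest{})
	if err != nil {
		panic(err)
	}
	// Until calico-node writes a CNI config, NetworkReady stays false with
	// reason NetworkPluginNotReady, and pods like csi-node-driver-s7ptc wait.
	for _, c := range resp.Status.Conditions {
		fmt.Printf("%s=%v reason=%s %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
```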
May 9 01:42:21.025050 containerd[1483]: time="2025-05-09T01:42:21.024998050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b6897c85-tnk8d,Uid:2dd0a15a-d381-46ec-98f1-6c673000c988,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0a5c7b5955bd716536af2f067c131cc05e2432216e78d72a304946f19345c3e\"" May 9 01:42:21.028490 containerd[1483]: time="2025-05-09T01:42:21.028073455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 9 01:42:21.053085 containerd[1483]: time="2025-05-09T01:42:21.052776588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-df7bh,Uid:7caefea7-09f0-44d9-9e7c-6eb1f7146100,Namespace:calico-system,Attempt:0,} returns sandbox id \"8935a12f40d0928a675c96b11b63f7b02d647011ac8631b0a0e5f812ccde110e\"" May 9 01:42:22.198039 kubelet[2823]: E0509 01:42:22.197371 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7ptc" podUID="12706b19-c70f-4b8e-b9f1-5ea62d04108c" May 9 01:42:24.197278 kubelet[2823]: E0509 01:42:24.196807 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7ptc" podUID="12706b19-c70f-4b8e-b9f1-5ea62d04108c" May 9 01:42:24.803889 containerd[1483]: time="2025-05-09T01:42:24.803844199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:24.805137 containerd[1483]: time="2025-05-09T01:42:24.805092539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 9 01:42:24.806076 containerd[1483]: time="2025-05-09T01:42:24.806027741Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:24.809678 containerd[1483]: time="2025-05-09T01:42:24.808951658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:24.809939 containerd[1483]: time="2025-05-09T01:42:24.809913276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 3.781796222s" May 9 01:42:24.810125 containerd[1483]: time="2025-05-09T01:42:24.810101238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 9 01:42:24.811317 containerd[1483]: time="2025-05-09T01:42:24.811284280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 9 01:42:24.827451 containerd[1483]: time="2025-05-09T01:42:24.827352260Z" level=info msg="CreateContainer within sandbox \"c0a5c7b5955bd716536af2f067c131cc05e2432216e78d72a304946f19345c3e\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 9 01:42:24.840413 containerd[1483]: time="2025-05-09T01:42:24.840366927Z" level=info msg="Container 107cb795290d84656d5f5c9349924c94a556f3203c1469ae88e1b332d9e48d49: CDI devices from CRI Config.CDIDevices: []" May 9 01:42:24.854632 containerd[1483]: time="2025-05-09T01:42:24.854584012Z" level=info msg="CreateContainer within sandbox \"c0a5c7b5955bd716536af2f067c131cc05e2432216e78d72a304946f19345c3e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"107cb795290d84656d5f5c9349924c94a556f3203c1469ae88e1b332d9e48d49\"" May 9 01:42:24.856590 containerd[1483]: time="2025-05-09T01:42:24.856171359Z" level=info msg="StartContainer for \"107cb795290d84656d5f5c9349924c94a556f3203c1469ae88e1b332d9e48d49\"" May 9 01:42:24.858173 containerd[1483]: time="2025-05-09T01:42:24.858148765Z" level=info msg="connecting to shim 107cb795290d84656d5f5c9349924c94a556f3203c1469ae88e1b332d9e48d49" address="unix:///run/containerd/s/7e22de952560d8d5050ced5c4cddc824cfd960ccaf5c32102ab9e243ce8c45c6" protocol=ttrpc version=3 May 9 01:42:24.889137 systemd[1]: Started cri-containerd-107cb795290d84656d5f5c9349924c94a556f3203c1469ae88e1b332d9e48d49.scope - libcontainer container 107cb795290d84656d5f5c9349924c94a556f3203c1469ae88e1b332d9e48d49. May 9 01:42:24.981982 containerd[1483]: time="2025-05-09T01:42:24.981909209Z" level=info msg="StartContainer for \"107cb795290d84656d5f5c9349924c94a556f3203c1469ae88e1b332d9e48d49\" returns successfully" May 9 01:42:25.476871 kubelet[2823]: E0509 01:42:25.476831 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:25.476871 kubelet[2823]: W0509 01:42:25.476853 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:25.476871 kubelet[2823]: E0509 01:42:25.476894 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the identical probe-failure triplet then recurs continuously from 01:42:25.477 through 01:42:25.487, where the captured log ends mid-message]
Error: unexpected end of JSON input" May 9 01:42:26.196131 kubelet[2823]: E0509 01:42:26.195606 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7ptc" podUID="12706b19-c70f-4b8e-b9f1-5ea62d04108c" May 9 01:42:26.411238 kubelet[2823]: I0509 01:42:26.411124 2823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 01:42:26.488536 kubelet[2823]: E0509 01:42:26.488083 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.488536 kubelet[2823]: W0509 01:42:26.488122 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.488536 kubelet[2823]: E0509 01:42:26.488155 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.490139 kubelet[2823]: E0509 01:42:26.488570 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.490139 kubelet[2823]: W0509 01:42:26.488593 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.490139 kubelet[2823]: E0509 01:42:26.488614 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.490139 kubelet[2823]: E0509 01:42:26.489199 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.490139 kubelet[2823]: W0509 01:42:26.489222 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.490139 kubelet[2823]: E0509 01:42:26.489245 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.490139 kubelet[2823]: E0509 01:42:26.489602 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.490139 kubelet[2823]: W0509 01:42:26.489622 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.490139 kubelet[2823]: E0509 01:42:26.489643 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:42:26.490139 kubelet[2823]: E0509 01:42:26.490026 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.491445 kubelet[2823]: W0509 01:42:26.490048 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.491445 kubelet[2823]: E0509 01:42:26.490069 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.491445 kubelet[2823]: E0509 01:42:26.490380 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.491445 kubelet[2823]: W0509 01:42:26.490400 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.491445 kubelet[2823]: E0509 01:42:26.490423 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.491445 kubelet[2823]: E0509 01:42:26.490733 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.491445 kubelet[2823]: W0509 01:42:26.490754 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.491445 kubelet[2823]: E0509 01:42:26.490774 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.491445 kubelet[2823]: E0509 01:42:26.491142 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.491445 kubelet[2823]: W0509 01:42:26.491191 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.492568 kubelet[2823]: E0509 01:42:26.491213 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.492568 kubelet[2823]: E0509 01:42:26.491620 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.492568 kubelet[2823]: W0509 01:42:26.491642 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.492568 kubelet[2823]: E0509 01:42:26.491663 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:42:26.492568 kubelet[2823]: E0509 01:42:26.492002 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.492568 kubelet[2823]: W0509 01:42:26.492024 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.492568 kubelet[2823]: E0509 01:42:26.492046 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.492568 kubelet[2823]: E0509 01:42:26.492355 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.492568 kubelet[2823]: W0509 01:42:26.492374 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.492568 kubelet[2823]: E0509 01:42:26.492393 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.493232 kubelet[2823]: E0509 01:42:26.492701 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.493232 kubelet[2823]: W0509 01:42:26.492723 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.493232 kubelet[2823]: E0509 01:42:26.492743 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.493568 kubelet[2823]: E0509 01:42:26.493331 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.493568 kubelet[2823]: W0509 01:42:26.493353 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.493568 kubelet[2823]: E0509 01:42:26.493374 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.493898 kubelet[2823]: E0509 01:42:26.493706 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.493898 kubelet[2823]: W0509 01:42:26.493727 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.493898 kubelet[2823]: E0509 01:42:26.493747 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:42:26.494160 kubelet[2823]: E0509 01:42:26.494123 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.494160 kubelet[2823]: W0509 01:42:26.494144 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.494477 kubelet[2823]: E0509 01:42:26.494165 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.495046 kubelet[2823]: E0509 01:42:26.494720 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.495046 kubelet[2823]: W0509 01:42:26.494764 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.495046 kubelet[2823]: E0509 01:42:26.494786 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.495347 kubelet[2823]: E0509 01:42:26.495282 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.495347 kubelet[2823]: W0509 01:42:26.495304 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.495613 kubelet[2823]: E0509 01:42:26.495358 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.495852 kubelet[2823]: E0509 01:42:26.495806 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.495852 kubelet[2823]: W0509 01:42:26.495838 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.496203 kubelet[2823]: E0509 01:42:26.495871 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.496302 kubelet[2823]: E0509 01:42:26.496278 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.496302 kubelet[2823]: W0509 01:42:26.496300 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.496624 kubelet[2823]: E0509 01:42:26.496336 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:42:26.496732 kubelet[2823]: E0509 01:42:26.496660 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.496732 kubelet[2823]: W0509 01:42:26.496681 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.496845 kubelet[2823]: E0509 01:42:26.496738 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.497398 kubelet[2823]: E0509 01:42:26.497320 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.497398 kubelet[2823]: W0509 01:42:26.497360 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.497398 kubelet[2823]: E0509 01:42:26.497395 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.497844 kubelet[2823]: E0509 01:42:26.497812 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.497844 kubelet[2823]: W0509 01:42:26.497841 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.498269 kubelet[2823]: E0509 01:42:26.498219 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.498806 kubelet[2823]: E0509 01:42:26.498750 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.498806 kubelet[2823]: W0509 01:42:26.498798 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.499339 kubelet[2823]: E0509 01:42:26.499259 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.499698 kubelet[2823]: E0509 01:42:26.499664 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.499698 kubelet[2823]: W0509 01:42:26.499694 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.500005 kubelet[2823]: E0509 01:42:26.499851 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:42:26.501169 kubelet[2823]: E0509 01:42:26.500884 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.501169 kubelet[2823]: W0509 01:42:26.500930 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.501169 kubelet[2823]: E0509 01:42:26.501072 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.501424 kubelet[2823]: E0509 01:42:26.501293 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.501424 kubelet[2823]: W0509 01:42:26.501314 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.501635 kubelet[2823]: E0509 01:42:26.501579 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.502075 kubelet[2823]: E0509 01:42:26.501680 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.502075 kubelet[2823]: W0509 01:42:26.501707 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.502075 kubelet[2823]: E0509 01:42:26.501755 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.502556 kubelet[2823]: E0509 01:42:26.502520 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.502704 kubelet[2823]: W0509 01:42:26.502674 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.502873 kubelet[2823]: E0509 01:42:26.502846 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.503364 kubelet[2823]: E0509 01:42:26.503276 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.503364 kubelet[2823]: W0509 01:42:26.503307 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.503364 kubelet[2823]: E0509 01:42:26.503363 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 9 01:42:26.503833 kubelet[2823]: E0509 01:42:26.503795 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.503833 kubelet[2823]: W0509 01:42:26.503824 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.504050 kubelet[2823]: E0509 01:42:26.503855 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.504691 kubelet[2823]: E0509 01:42:26.504630 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.504691 kubelet[2823]: W0509 01:42:26.504668 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.505159 kubelet[2823]: E0509 01:42:26.505069 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.505159 kubelet[2823]: E0509 01:42:26.505114 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.505159 kubelet[2823]: W0509 01:42:26.505140 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.505386 kubelet[2823]: E0509 01:42:26.505164 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 9 01:42:26.506114 kubelet[2823]: E0509 01:42:26.505909 2823 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 9 01:42:26.506114 kubelet[2823]: W0509 01:42:26.505941 2823 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 9 01:42:26.506114 kubelet[2823]: E0509 01:42:26.506009 2823 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
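The flood above is the kubelet's FlexVolume prober failing in a loop: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init, the executable does not exist, stdout is therefore empty, and unmarshalling "" as JSON yields "unexpected end of JSON input". For context, a FlexVolume driver is just an executable that answers each call with a JSON status document on stdout. The sketch below is a hypothetical minimal driver that would satisfy the init probe; it is not the real nodeagent~uds binary, whose actual behavior the log does not show:

```go
// flexvol_stub.go -- minimal sketch of the FlexVolume call protocol.
// The kubelet runs `<driver> init` and parses stdout as JSON; an empty
// stdout is exactly what produces "unexpected end of JSON input" above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the status document the FlexVolume interface expects.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) < 2 {
		reply(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// A socket-based driver (as the nodeagent~uds name suggests)
		// would typically report that it does not implement attach.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		// Unhandled calls are reported as "Not supported" per the interface.
		reply(driverStatus{Status: "Not supported"})
		os.Exit(1)
	}
}
```

Any executable with this behavior at the probed path would quiet the loop; on this node the errors simply mean the expected driver has not been installed yet, so the kubelet keeps re-probing the plugin directory.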
May 9 01:42:26.956317 containerd[1483]: time="2025-05-09T01:42:26.956251950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:26.957998 containerd[1483]: time="2025-05-09T01:42:26.957745085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 9 01:42:26.959164 containerd[1483]: time="2025-05-09T01:42:26.959102162Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:26.961709 containerd[1483]: time="2025-05-09T01:42:26.961662887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:26.962626 containerd[1483]: time="2025-05-09T01:42:26.962418036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.151099324s" May 9 01:42:26.962626 containerd[1483]: time="2025-05-09T01:42:26.962475290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 9 01:42:26.965528 containerd[1483]: time="2025-05-09T01:42:26.965478622Z" level=info msg="CreateContainer within sandbox \"8935a12f40d0928a675c96b11b63f7b02d647011ac8631b0a0e5f812ccde110e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 9 01:42:26.981282 containerd[1483]: time="2025-05-09T01:42:26.977230996Z" level=info msg="Container d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f: CDI devices from CRI Config.CDIDevices: []" May 9 01:42:26.993215 containerd[1483]: time="2025-05-09T01:42:26.993174890Z" level=info msg="CreateContainer within sandbox \"8935a12f40d0928a675c96b11b63f7b02d647011ac8631b0a0e5f812ccde110e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f\"" May 9 01:42:26.995741 containerd[1483]: time="2025-05-09T01:42:26.994004625Z" level=info msg="StartContainer for \"d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f\"" May 9 01:42:26.995741 containerd[1483]: time="2025-05-09T01:42:26.995676236Z" level=info msg="connecting to shim d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f" address="unix:///run/containerd/s/a3e55e404d4d5b09852083c1246aa8404bcf74b3f0fa928f5642ef4f51d51d4d" protocol=ttrpc version=3 May 9 01:42:27.023218 systemd[1]: Started cri-containerd-d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f.scope - libcontainer container d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f.
May 9 01:42:27.084056 containerd[1483]: time="2025-05-09T01:42:27.084003099Z" level=info msg="StartContainer for \"d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f\" returns successfully" May 9 01:42:27.097169 systemd[1]: cri-containerd-d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f.scope: Deactivated successfully. May 9 01:42:27.097849 containerd[1483]: time="2025-05-09T01:42:27.097451120Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f\" id:\"d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f\" pid:3439 exited_at:{seconds:1746754947 nanos:96724872}" May 9 01:42:27.097849 containerd[1483]: time="2025-05-09T01:42:27.097550533Z" level=info msg="received exit event container_id:\"d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f\" id:\"d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f\" pid:3439 exited_at:{seconds:1746754947 nanos:96724872}" May 9 01:42:27.137756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f-rootfs.mount: Deactivated successfully. May 9 01:42:27.720055 kubelet[2823]: I0509 01:42:27.719892 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b6897c85-tnk8d" podStartSLOduration=3.934789095 podStartE2EDuration="7.718763106s" podCreationTimestamp="2025-05-09 01:42:20 +0000 UTC" firstStartedPulling="2025-05-09 01:42:21.027086301 +0000 UTC m=+22.219882625" lastFinishedPulling="2025-05-09 01:42:24.811060322 +0000 UTC m=+26.003856636" observedRunningTime="2025-05-09 01:42:25.424735701 +0000 UTC m=+26.617532025" watchObservedRunningTime="2025-05-09 01:42:27.718763106 +0000 UTC m=+28.911559521" May 9 01:42:28.197051 kubelet[2823]: E0509 01:42:28.196064 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7ptc" podUID="12706b19-c70f-4b8e-b9f1-5ea62d04108c" May 9 01:42:28.433339 containerd[1483]: time="2025-05-09T01:42:28.433265818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 9 01:42:30.195706 kubelet[2823]: E0509 01:42:30.195602 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7ptc" podUID="12706b19-c70f-4b8e-b9f1-5ea62d04108c" May 9 01:42:32.197981 kubelet[2823]: E0509 01:42:32.196379 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7ptc" podUID="12706b19-c70f-4b8e-b9f1-5ea62d04108c" May 9 01:42:34.196073 kubelet[2823]: E0509 01:42:34.196018 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7ptc" podUID="12706b19-c70f-4b8e-b9f1-5ea62d04108c" May 9 01:42:35.119378 containerd[1483]: 
time="2025-05-09T01:42:35.119309429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:35.120917 containerd[1483]: time="2025-05-09T01:42:35.120687795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 9 01:42:35.122468 containerd[1483]: time="2025-05-09T01:42:35.122097850Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:35.124731 containerd[1483]: time="2025-05-09T01:42:35.124697994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:35.125405 containerd[1483]: time="2025-05-09T01:42:35.125372431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 6.691991201s" May 9 01:42:35.125456 containerd[1483]: time="2025-05-09T01:42:35.125406634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 9 01:42:35.129175 containerd[1483]: time="2025-05-09T01:42:35.128917810Z" level=info msg="CreateContainer within sandbox \"8935a12f40d0928a675c96b11b63f7b02d647011ac8631b0a0e5f812ccde110e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 9 01:42:35.137541 containerd[1483]: time="2025-05-09T01:42:35.137505576Z" level=info msg="Container 2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d: CDI devices from CRI Config.CDIDevices: []" May 9 01:42:35.163432 containerd[1483]: time="2025-05-09T01:42:35.163358780Z" level=info msg="CreateContainer within sandbox \"8935a12f40d0928a675c96b11b63f7b02d647011ac8631b0a0e5f812ccde110e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d\"" May 9 01:42:35.166220 containerd[1483]: time="2025-05-09T01:42:35.166171827Z" level=info msg="StartContainer for \"2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d\"" May 9 01:42:35.169056 containerd[1483]: time="2025-05-09T01:42:35.168942676Z" level=info msg="connecting to shim 2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d" address="unix:///run/containerd/s/a3e55e404d4d5b09852083c1246aa8404bcf74b3f0fa928f5642ef4f51d51d4d" protocol=ttrpc version=3 May 9 01:42:35.218153 systemd[1]: Started cri-containerd-2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d.scope - libcontainer container 2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d. 
May 9 01:42:35.284495 containerd[1483]: time="2025-05-09T01:42:35.284438234Z" level=info msg="StartContainer for \"2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d\" returns successfully" May 9 01:42:36.196482 kubelet[2823]: E0509 01:42:36.196342 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s7ptc" podUID="12706b19-c70f-4b8e-b9f1-5ea62d04108c" May 9 01:42:36.770409 containerd[1483]: time="2025-05-09T01:42:36.770278849Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 01:42:36.778504 systemd[1]: cri-containerd-2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d.scope: Deactivated successfully. May 9 01:42:36.779019 systemd[1]: cri-containerd-2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d.scope: Consumed 1.024s CPU time, 173M memory peak, 154M written to disk. May 9 01:42:36.782612 containerd[1483]: time="2025-05-09T01:42:36.782217647Z" level=info msg="received exit event container_id:\"2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d\" id:\"2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d\" pid:3499 exited_at:{seconds:1746754956 nanos:781853072}" May 9 01:42:36.784921 containerd[1483]: time="2025-05-09T01:42:36.784820931Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d\" id:\"2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d\" pid:3499 exited_at:{seconds:1746754956 nanos:781853072}" May 9 01:42:36.828476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d-rootfs.mount: Deactivated successfully. May 9 01:42:36.879180 kubelet[2823]: I0509 01:42:36.879122 2823 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 9 01:42:37.440157 kubelet[2823]: I0509 01:42:37.439999 2823 topology_manager.go:215] "Topology Admit Handler" podUID="c18dbaf7-57a9-4315-a675-a08552fcd54a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9hcs9" May 9 01:42:37.462391 systemd[1]: Created slice kubepods-burstable-podc18dbaf7_57a9_4315_a675_a08552fcd54a.slice - libcontainer container kubepods-burstable-podc18dbaf7_57a9_4315_a675_a08552fcd54a.slice. 
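The "failed to reload cni configuration" error above is worth unpacking: containerd watches /etc/cni/net.d, the WRITE event it reacted to was for calico-kubeconfig (a credentials file the install-cni step drops, not a network config), and at that moment no *.conf/*.conflist existed yet, so the reload found nothing to load. Below is a rough sketch of that rescan, assuming (the log does not show containerd's internals) that it amounts to filtering the directory by config-like extensions:

```go
// cniscan.go -- illustrative reconstruction of the CNI config rescan that
// fails above; not containerd's actual code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// loadCNIConfigNames returns the config-like files in dir, or an error
// mirroring the log message when none are present.
func loadCNIConfigNames(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var configs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			configs = append(configs, e.Name())
		}
	}
	if len(configs) == 0 {
		return nil, fmt.Errorf("no network config found in %s: cni plugin not initialized: failed to load cni config", dir)
	}
	return configs, nil
}

func main() {
	names, err := loadCNIConfigNames("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("loaded CNI configs:", names)
}
```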
May 9 01:42:37.603505 kubelet[2823]: I0509 01:42:37.603127 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c18dbaf7-57a9-4315-a675-a08552fcd54a-config-volume\") pod \"coredns-7db6d8ff4d-9hcs9\" (UID: \"c18dbaf7-57a9-4315-a675-a08552fcd54a\") " pod="kube-system/coredns-7db6d8ff4d-9hcs9" May 9 01:42:37.603505 kubelet[2823]: I0509 01:42:37.603243 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gwhx\" (UniqueName: \"kubernetes.io/projected/c18dbaf7-57a9-4315-a675-a08552fcd54a-kube-api-access-8gwhx\") pod \"coredns-7db6d8ff4d-9hcs9\" (UID: \"c18dbaf7-57a9-4315-a675-a08552fcd54a\") " pod="kube-system/coredns-7db6d8ff4d-9hcs9" May 9 01:42:37.753122 kubelet[2823]: I0509 01:42:37.751260 2823 topology_manager.go:215] "Topology Admit Handler" podUID="d7bf2986-6599-4731-b275-b481f64f7f4e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-q2w9q" May 9 01:42:37.760851 kubelet[2823]: I0509 01:42:37.760756 2823 topology_manager.go:215] "Topology Admit Handler" podUID="a7a2f50b-5548-4ade-aed9-490173212465" podNamespace="calico-system" podName="calico-kube-controllers-5fd7846df9-5ctkj" May 9 01:42:37.768477 kubelet[2823]: I0509 01:42:37.768234 2823 topology_manager.go:215] "Topology Admit Handler" podUID="db01ad2c-f776-4d0e-9399-e00a93c0f573" podNamespace="calico-apiserver" podName="calico-apiserver-6b6866ffd4-264lw" May 9 01:42:37.784905 systemd[1]: Created slice kubepods-burstable-podd7bf2986_6599_4731_b275_b481f64f7f4e.slice - libcontainer container kubepods-burstable-podd7bf2986_6599_4731_b275_b481f64f7f4e.slice. May 9 01:42:37.818930 systemd[1]: Created slice kubepods-besteffort-poda7a2f50b_5548_4ade_aed9_490173212465.slice - libcontainer container kubepods-besteffort-poda7a2f50b_5548_4ade_aed9_490173212465.slice. May 9 01:42:37.824715 kubelet[2823]: I0509 01:42:37.823979 2823 topology_manager.go:215] "Topology Admit Handler" podUID="c37c0d0b-f2c5-4e22-9d60-2a762c176d6e" podNamespace="calico-apiserver" podName="calico-apiserver-6b6866ffd4-7xb64" May 9 01:42:37.834017 systemd[1]: Created slice kubepods-besteffort-poddb01ad2c_f776_4d0e_9399_e00a93c0f573.slice - libcontainer container kubepods-besteffort-poddb01ad2c_f776_4d0e_9399_e00a93c0f573.slice. May 9 01:42:37.852917 systemd[1]: Created slice kubepods-besteffort-podc37c0d0b_f2c5_4e22_9d60_2a762c176d6e.slice - libcontainer container kubepods-besteffort-podc37c0d0b_f2c5_4e22_9d60_2a762c176d6e.slice. 
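A side note on the systemd "Created slice" records above: with the systemd cgroup driver, the kubelet derives each pod's slice name from its QoS class and UID, escaping the UID's dashes to underscores. A small hypothetical helper (not kubelet code) reproduces the pattern seen in the log:

```go
// podslice.go -- reproduces the slice-name pattern from the records above.
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds kubepods-<qos>-pod<uid>.slice with dashes escaped,
// matching e.g. kubepods-besteffort-podc37c0d0b_f2c5_4e22_9d60_2a762c176d6e.slice.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "c37c0d0b-f2c5-4e22-9d60-2a762c176d6e"))
}
```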
May 9 01:42:37.906813 kubelet[2823]: I0509 01:42:37.906586 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7bf2986-6599-4731-b275-b481f64f7f4e-config-volume\") pod \"coredns-7db6d8ff4d-q2w9q\" (UID: \"d7bf2986-6599-4731-b275-b481f64f7f4e\") " pod="kube-system/coredns-7db6d8ff4d-q2w9q" May 9 01:42:37.906813 kubelet[2823]: I0509 01:42:37.906652 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c37c0d0b-f2c5-4e22-9d60-2a762c176d6e-calico-apiserver-certs\") pod \"calico-apiserver-6b6866ffd4-7xb64\" (UID: \"c37c0d0b-f2c5-4e22-9d60-2a762c176d6e\") " pod="calico-apiserver/calico-apiserver-6b6866ffd4-7xb64" May 9 01:42:37.906813 kubelet[2823]: I0509 01:42:37.906787 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twg96\" (UniqueName: \"kubernetes.io/projected/d7bf2986-6599-4731-b275-b481f64f7f4e-kube-api-access-twg96\") pod \"coredns-7db6d8ff4d-q2w9q\" (UID: \"d7bf2986-6599-4731-b275-b481f64f7f4e\") " pod="kube-system/coredns-7db6d8ff4d-q2w9q" May 9 01:42:37.907075 kubelet[2823]: I0509 01:42:37.906831 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7a2f50b-5548-4ade-aed9-490173212465-tigera-ca-bundle\") pod \"calico-kube-controllers-5fd7846df9-5ctkj\" (UID: \"a7a2f50b-5548-4ade-aed9-490173212465\") " pod="calico-system/calico-kube-controllers-5fd7846df9-5ctkj" May 9 01:42:37.907075 kubelet[2823]: I0509 01:42:37.907006 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqvmr\" (UniqueName: \"kubernetes.io/projected/a7a2f50b-5548-4ade-aed9-490173212465-kube-api-access-cqvmr\") pod \"calico-kube-controllers-5fd7846df9-5ctkj\" (UID: \"a7a2f50b-5548-4ade-aed9-490173212465\") " pod="calico-system/calico-kube-controllers-5fd7846df9-5ctkj" May 9 01:42:37.907156 kubelet[2823]: I0509 01:42:37.907128 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db01ad2c-f776-4d0e-9399-e00a93c0f573-calico-apiserver-certs\") pod \"calico-apiserver-6b6866ffd4-264lw\" (UID: \"db01ad2c-f776-4d0e-9399-e00a93c0f573\") " pod="calico-apiserver/calico-apiserver-6b6866ffd4-264lw" May 9 01:42:37.907297 kubelet[2823]: I0509 01:42:37.907191 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlgr6\" (UniqueName: \"kubernetes.io/projected/db01ad2c-f776-4d0e-9399-e00a93c0f573-kube-api-access-vlgr6\") pod \"calico-apiserver-6b6866ffd4-264lw\" (UID: \"db01ad2c-f776-4d0e-9399-e00a93c0f573\") " pod="calico-apiserver/calico-apiserver-6b6866ffd4-264lw" May 9 01:42:38.010144 kubelet[2823]: I0509 01:42:38.008505 2823 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s867\" (UniqueName: \"kubernetes.io/projected/c37c0d0b-f2c5-4e22-9d60-2a762c176d6e-kube-api-access-5s867\") pod \"calico-apiserver-6b6866ffd4-7xb64\" (UID: \"c37c0d0b-f2c5-4e22-9d60-2a762c176d6e\") " pod="calico-apiserver/calico-apiserver-6b6866ffd4-7xb64" May 9 01:42:38.069767 containerd[1483]: time="2025-05-09T01:42:38.069694640Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hcs9,Uid:c18dbaf7-57a9-4315-a675-a08552fcd54a,Namespace:kube-system,Attempt:0,}" May 9 01:42:38.103029 containerd[1483]: time="2025-05-09T01:42:38.102352977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2w9q,Uid:d7bf2986-6599-4731-b275-b481f64f7f4e,Namespace:kube-system,Attempt:0,}" May 9 01:42:38.130140 containerd[1483]: time="2025-05-09T01:42:38.130101079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fd7846df9-5ctkj,Uid:a7a2f50b-5548-4ade-aed9-490173212465,Namespace:calico-system,Attempt:0,}" May 9 01:42:38.150392 containerd[1483]: time="2025-05-09T01:42:38.150335301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6866ffd4-264lw,Uid:db01ad2c-f776-4d0e-9399-e00a93c0f573,Namespace:calico-apiserver,Attempt:0,}" May 9 01:42:38.156995 containerd[1483]: time="2025-05-09T01:42:38.156932131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6866ffd4-7xb64,Uid:c37c0d0b-f2c5-4e22-9d60-2a762c176d6e,Namespace:calico-apiserver,Attempt:0,}" May 9 01:42:38.193029 containerd[1483]: time="2025-05-09T01:42:38.192925567Z" level=error msg="Failed to destroy network for sandbox \"2f1db9df7d3abf00ce0823d725bd16aa2a079f66e1759d9787becf4dc6f639b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.195152 containerd[1483]: time="2025-05-09T01:42:38.195103862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hcs9,Uid:c18dbaf7-57a9-4315-a675-a08552fcd54a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f1db9df7d3abf00ce0823d725bd16aa2a079f66e1759d9787becf4dc6f639b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.201928 kubelet[2823]: E0509 01:42:38.195634 2823 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f1db9df7d3abf00ce0823d725bd16aa2a079f66e1759d9787becf4dc6f639b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.201928 kubelet[2823]: E0509 01:42:38.195742 2823 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f1db9df7d3abf00ce0823d725bd16aa2a079f66e1759d9787becf4dc6f639b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9hcs9" May 9 01:42:38.201928 kubelet[2823]: E0509 01:42:38.195769 2823 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f1db9df7d3abf00ce0823d725bd16aa2a079f66e1759d9787becf4dc6f639b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9hcs9" May 9 01:42:38.202132 
kubelet[2823]: E0509 01:42:38.195824 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9hcs9_kube-system(c18dbaf7-57a9-4315-a675-a08552fcd54a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9hcs9_kube-system(c18dbaf7-57a9-4315-a675-a08552fcd54a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f1db9df7d3abf00ce0823d725bd16aa2a079f66e1759d9787becf4dc6f639b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9hcs9" podUID="c18dbaf7-57a9-4315-a675-a08552fcd54a" May 9 01:42:38.212587 systemd[1]: Created slice kubepods-besteffort-pod12706b19_c70f_4b8e_b9f1_5ea62d04108c.slice - libcontainer container kubepods-besteffort-pod12706b19_c70f_4b8e_b9f1_5ea62d04108c.slice. May 9 01:42:38.220497 containerd[1483]: time="2025-05-09T01:42:38.220446197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s7ptc,Uid:12706b19-c70f-4b8e-b9f1-5ea62d04108c,Namespace:calico-system,Attempt:0,}" May 9 01:42:38.324336 containerd[1483]: time="2025-05-09T01:42:38.324184204Z" level=error msg="Failed to destroy network for sandbox \"c141dded49924372bb2786b22e375ee99883acbd012b4184764a75ad0f41be38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.326308 containerd[1483]: time="2025-05-09T01:42:38.326156727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2w9q,Uid:d7bf2986-6599-4731-b275-b481f64f7f4e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c141dded49924372bb2786b22e375ee99883acbd012b4184764a75ad0f41be38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.326696 kubelet[2823]: E0509 01:42:38.326554 2823 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c141dded49924372bb2786b22e375ee99883acbd012b4184764a75ad0f41be38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.326696 kubelet[2823]: E0509 01:42:38.326621 2823 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c141dded49924372bb2786b22e375ee99883acbd012b4184764a75ad0f41be38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q2w9q" May 9 01:42:38.326696 kubelet[2823]: E0509 01:42:38.326645 2823 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c141dded49924372bb2786b22e375ee99883acbd012b4184764a75ad0f41be38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-q2w9q" May 9 01:42:38.329090 kubelet[2823]: E0509 01:42:38.326694 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q2w9q_kube-system(d7bf2986-6599-4731-b275-b481f64f7f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q2w9q_kube-system(d7bf2986-6599-4731-b275-b481f64f7f4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c141dded49924372bb2786b22e375ee99883acbd012b4184764a75ad0f41be38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-q2w9q" podUID="d7bf2986-6599-4731-b275-b481f64f7f4e" May 9 01:42:38.345431 containerd[1483]: time="2025-05-09T01:42:38.345124574Z" level=error msg="Failed to destroy network for sandbox \"5b0978bd2dea585265ac0f37532f365a02c2a32a4ad547fc305795a607bbe7ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.347145 containerd[1483]: time="2025-05-09T01:42:38.347112868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6866ffd4-264lw,Uid:db01ad2c-f776-4d0e-9399-e00a93c0f573,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b0978bd2dea585265ac0f37532f365a02c2a32a4ad547fc305795a607bbe7ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.347622 kubelet[2823]: E0509 01:42:38.347570 2823 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b0978bd2dea585265ac0f37532f365a02c2a32a4ad547fc305795a607bbe7ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.349090 kubelet[2823]: E0509 01:42:38.347644 2823 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b0978bd2dea585265ac0f37532f365a02c2a32a4ad547fc305795a607bbe7ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6866ffd4-264lw" May 9 01:42:38.349090 kubelet[2823]: E0509 01:42:38.347671 2823 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b0978bd2dea585265ac0f37532f365a02c2a32a4ad547fc305795a607bbe7ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6866ffd4-264lw" May 9 01:42:38.349090 kubelet[2823]: E0509 01:42:38.347722 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6866ffd4-264lw_calico-apiserver(db01ad2c-f776-4d0e-9399-e00a93c0f573)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b6866ffd4-264lw_calico-apiserver(db01ad2c-f776-4d0e-9399-e00a93c0f573)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b0978bd2dea585265ac0f37532f365a02c2a32a4ad547fc305795a607bbe7ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b6866ffd4-264lw" podUID="db01ad2c-f776-4d0e-9399-e00a93c0f573" May 9 01:42:38.361097 containerd[1483]: time="2025-05-09T01:42:38.360673117Z" level=error msg="Failed to destroy network for sandbox \"e547e6f2a25988a8775658042238eafa76fd950232aee9714ae559b9e76a9f11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.362339 containerd[1483]: time="2025-05-09T01:42:38.362269224Z" level=error msg="Failed to destroy network for sandbox \"912073a0f426c75c967b39fed3520e15096c84c7421a309db53d811b6d50f29a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.362537 containerd[1483]: time="2025-05-09T01:42:38.362482780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fd7846df9-5ctkj,Uid:a7a2f50b-5548-4ade-aed9-490173212465,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e547e6f2a25988a8775658042238eafa76fd950232aee9714ae559b9e76a9f11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.363048 kubelet[2823]: E0509 01:42:38.362768 2823 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e547e6f2a25988a8775658042238eafa76fd950232aee9714ae559b9e76a9f11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.363048 kubelet[2823]: E0509 01:42:38.362888 2823 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e547e6f2a25988a8775658042238eafa76fd950232aee9714ae559b9e76a9f11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fd7846df9-5ctkj" May 9 01:42:38.363236 kubelet[2823]: E0509 01:42:38.363166 2823 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e547e6f2a25988a8775658042238eafa76fd950232aee9714ae559b9e76a9f11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fd7846df9-5ctkj" May 9 01:42:38.364015 kubelet[2823]: E0509 01:42:38.363308 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="failed 
to \"CreatePodSandbox\" for \"calico-kube-controllers-5fd7846df9-5ctkj_calico-system(a7a2f50b-5548-4ade-aed9-490173212465)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fd7846df9-5ctkj_calico-system(a7a2f50b-5548-4ade-aed9-490173212465)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e547e6f2a25988a8775658042238eafa76fd950232aee9714ae559b9e76a9f11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fd7846df9-5ctkj" podUID="a7a2f50b-5548-4ade-aed9-490173212465" May 9 01:42:38.367089 containerd[1483]: time="2025-05-09T01:42:38.367026737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6866ffd4-7xb64,Uid:c37c0d0b-f2c5-4e22-9d60-2a762c176d6e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"912073a0f426c75c967b39fed3520e15096c84c7421a309db53d811b6d50f29a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.368228 kubelet[2823]: E0509 01:42:38.367629 2823 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912073a0f426c75c967b39fed3520e15096c84c7421a309db53d811b6d50f29a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.368228 kubelet[2823]: E0509 01:42:38.367704 2823 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912073a0f426c75c967b39fed3520e15096c84c7421a309db53d811b6d50f29a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6866ffd4-7xb64" May 9 01:42:38.368228 kubelet[2823]: E0509 01:42:38.367729 2823 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912073a0f426c75c967b39fed3520e15096c84c7421a309db53d811b6d50f29a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b6866ffd4-7xb64" May 9 01:42:38.368406 kubelet[2823]: E0509 01:42:38.367797 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6866ffd4-7xb64_calico-apiserver(c37c0d0b-f2c5-4e22-9d60-2a762c176d6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b6866ffd4-7xb64_calico-apiserver(c37c0d0b-f2c5-4e22-9d60-2a762c176d6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"912073a0f426c75c967b39fed3520e15096c84c7421a309db53d811b6d50f29a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6b6866ffd4-7xb64" podUID="c37c0d0b-f2c5-4e22-9d60-2a762c176d6e" May 9 01:42:38.377218 containerd[1483]: time="2025-05-09T01:42:38.377155199Z" level=error msg="Failed to destroy network for sandbox \"e3860137b17c1fe5faa5d9a9f307e9a9d9ff4291d1f207dea193c333735742d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.379018 containerd[1483]: time="2025-05-09T01:42:38.378921782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s7ptc,Uid:12706b19-c70f-4b8e-b9f1-5ea62d04108c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3860137b17c1fe5faa5d9a9f307e9a9d9ff4291d1f207dea193c333735742d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.379291 kubelet[2823]: E0509 01:42:38.379249 2823 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3860137b17c1fe5faa5d9a9f307e9a9d9ff4291d1f207dea193c333735742d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 9 01:42:38.379390 kubelet[2823]: E0509 01:42:38.379321 2823 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3860137b17c1fe5faa5d9a9f307e9a9d9ff4291d1f207dea193c333735742d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s7ptc" May 9 01:42:38.379390 kubelet[2823]: E0509 01:42:38.379346 2823 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3860137b17c1fe5faa5d9a9f307e9a9d9ff4291d1f207dea193c333735742d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s7ptc" May 9 01:42:38.379559 kubelet[2823]: E0509 01:42:38.379418 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s7ptc_calico-system(12706b19-c70f-4b8e-b9f1-5ea62d04108c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s7ptc_calico-system(12706b19-c70f-4b8e-b9f1-5ea62d04108c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3860137b17c1fe5faa5d9a9f307e9a9d9ff4291d1f207dea193c333735742d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s7ptc" podUID="12706b19-c70f-4b8e-b9f1-5ea62d04108c" May 9 01:42:38.502661 containerd[1483]: time="2025-05-09T01:42:38.502589236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 9 01:42:47.333909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2209147819.mount: Deactivated 
successfully. May 9 01:42:47.395549 containerd[1483]: time="2025-05-09T01:42:47.395491084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:47.397220 containerd[1483]: time="2025-05-09T01:42:47.397052641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 9 01:42:47.399634 containerd[1483]: time="2025-05-09T01:42:47.398553267Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:47.401831 containerd[1483]: time="2025-05-09T01:42:47.401014221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 01:42:47.401831 containerd[1483]: time="2025-05-09T01:42:47.401645066Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.899006457s" May 9 01:42:47.401831 containerd[1483]: time="2025-05-09T01:42:47.401689578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 9 01:42:47.419792 containerd[1483]: time="2025-05-09T01:42:47.419751691Z" level=info msg="CreateContainer within sandbox \"8935a12f40d0928a675c96b11b63f7b02d647011ac8631b0a0e5f812ccde110e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 9 01:42:47.435395 containerd[1483]: time="2025-05-09T01:42:47.434953125Z" level=info msg="Container 2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb: CDI devices from CRI Config.CDIDevices: []" May 9 01:42:47.452728 containerd[1483]: time="2025-05-09T01:42:47.452675464Z" level=info msg="CreateContainer within sandbox \"8935a12f40d0928a675c96b11b63f7b02d647011ac8631b0a0e5f812ccde110e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\"" May 9 01:42:47.454086 containerd[1483]: time="2025-05-09T01:42:47.454053220Z" level=info msg="StartContainer for \"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\"" May 9 01:42:47.456138 containerd[1483]: time="2025-05-09T01:42:47.456100965Z" level=info msg="connecting to shim 2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb" address="unix:///run/containerd/s/a3e55e404d4d5b09852083c1246aa8404bcf74b3f0fa928f5642ef4f51d51d4d" protocol=ttrpc version=3 May 9 01:42:47.481201 systemd[1]: Started cri-containerd-2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb.scope - libcontainer container 2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb. 
May 9 01:42:47.502571 kubelet[2823]: I0509 01:42:47.501545 2823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 01:42:47.583666 containerd[1483]: time="2025-05-09T01:42:47.583615773Z" level=info msg="StartContainer for \"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" returns successfully" May 9 01:42:47.683746 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 9 01:42:47.683901 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 9 01:42:48.634144 kubelet[2823]: I0509 01:42:48.634001 2823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-df7bh" podStartSLOduration=2.2863996650000002 podStartE2EDuration="28.633950931s" podCreationTimestamp="2025-05-09 01:42:20 +0000 UTC" firstStartedPulling="2025-05-09 01:42:21.055298093 +0000 UTC m=+22.248094407" lastFinishedPulling="2025-05-09 01:42:47.402849359 +0000 UTC m=+48.595645673" observedRunningTime="2025-05-09 01:42:48.629908629 +0000 UTC m=+49.822704973" watchObservedRunningTime="2025-05-09 01:42:48.633950931 +0000 UTC m=+49.826747275" May 9 01:42:49.199239 containerd[1483]: time="2025-05-09T01:42:49.199190889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2w9q,Uid:d7bf2986-6599-4731-b275-b481f64f7f4e,Namespace:kube-system,Attempt:0,}" May 9 01:42:49.475727 systemd-networkd[1388]: cali468737ed348: Link UP May 9 01:42:49.475935 systemd-networkd[1388]: cali468737ed348: Gained carrier May 9 01:42:49.512510 containerd[1483]: 2025-05-09 01:42:49.252 [INFO][3847] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 9 01:42:49.512510 containerd[1483]: 2025-05-09 01:42:49.296 [INFO][3847] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-eth0 coredns-7db6d8ff4d- kube-system d7bf2986-6599-4731-b275-b481f64f7f4e 710 0 2025-05-09 01:42:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284-0-0-n-bbb05de7dc.novalocal coredns-7db6d8ff4d-q2w9q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali468737ed348 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2w9q" WorkloadEndpoint="ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-" May 9 01:42:49.512510 containerd[1483]: 2025-05-09 01:42:49.296 [INFO][3847] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2w9q" WorkloadEndpoint="ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-eth0" May 9 01:42:49.512510 containerd[1483]: 2025-05-09 01:42:49.368 [INFO][3880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" HandleID="k8s-pod-network.7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" Workload="ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-eth0" May 9 01:42:49.512831 containerd[1483]: 2025-05-09 01:42:49.388 [INFO][3880] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" HandleID="k8s-pod-network.7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" Workload="ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291db0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284-0-0-n-bbb05de7dc.novalocal", "pod":"coredns-7db6d8ff4d-q2w9q", "timestamp":"2025-05-09 01:42:49.368816567 +0000 UTC"}, Hostname:"ci-4284-0-0-n-bbb05de7dc.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 9 01:42:49.512831 containerd[1483]: 2025-05-09 01:42:49.389 [INFO][3880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 9 01:42:49.512831 containerd[1483]: 2025-05-09 01:42:49.389 [INFO][3880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 9 01:42:49.512831 containerd[1483]: 2025-05-09 01:42:49.389 [INFO][3880] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-bbb05de7dc.novalocal' May 9 01:42:49.512831 containerd[1483]: 2025-05-09 01:42:49.396 [INFO][3880] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" host="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:42:49.512831 containerd[1483]: 2025-05-09 01:42:49.412 [INFO][3880] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:42:49.512831 containerd[1483]: 2025-05-09 01:42:49.421 [INFO][3880] ipam/ipam.go 489: Trying affinity for 192.168.44.64/26 host="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:42:49.512831 containerd[1483]: 2025-05-09 01:42:49.425 [INFO][3880] ipam/ipam.go 155: Attempting to load block cidr=192.168.44.64/26 host="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:42:49.512831 containerd[1483]: 2025-05-09 01:42:49.429 [INFO][3880] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:42:49.514362 containerd[1483]: 2025-05-09 01:42:49.429 [INFO][3880] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" host="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:42:49.514362 containerd[1483]: 2025-05-09 01:42:49.433 [INFO][3880] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200 May 9 01:42:49.514362 containerd[1483]: 2025-05-09 01:42:49.441 [INFO][3880] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" host="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:42:49.514362 containerd[1483]: 2025-05-09 01:42:49.455 [INFO][3880] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.44.65/26] block=192.168.44.64/26 handle="k8s-pod-network.7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" host="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:42:49.514362 containerd[1483]: 2025-05-09 01:42:49.455 [INFO][3880] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.44.65/26] handle="k8s-pod-network.7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" 
host="ci-4284-0-0-n-bbb05de7dc.novalocal" May 9 01:42:49.514362 containerd[1483]: 2025-05-09 01:42:49.456 [INFO][3880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 9 01:42:49.514362 containerd[1483]: 2025-05-09 01:42:49.456 [INFO][3880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.44.65/26] IPv6=[] ContainerID="7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" HandleID="k8s-pod-network.7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" Workload="ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-eth0" May 9 01:42:49.516212 containerd[1483]: 2025-05-09 01:42:49.459 [INFO][3847] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2w9q" WorkloadEndpoint="ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d7bf2986-6599-4731-b275-b481f64f7f4e", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 1, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-bbb05de7dc.novalocal", ContainerID:"", Pod:"coredns-7db6d8ff4d-q2w9q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali468737ed348", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 01:42:49.516212 containerd[1483]: 2025-05-09 01:42:49.459 [INFO][3847] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.44.65/32] ContainerID="7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2w9q" WorkloadEndpoint="ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-eth0" May 9 01:42:49.516212 containerd[1483]: 2025-05-09 01:42:49.459 [INFO][3847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali468737ed348 ContainerID="7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2w9q" WorkloadEndpoint="ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-eth0" May 9 01:42:49.516212 containerd[1483]: 2025-05-09 01:42:49.477 
[INFO][3847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2w9q" WorkloadEndpoint="ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-eth0" May 9 01:42:49.516212 containerd[1483]: 2025-05-09 01:42:49.480 [INFO][3847] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2w9q" WorkloadEndpoint="ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d7bf2986-6599-4731-b275-b481f64f7f4e", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 9, 1, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-bbb05de7dc.novalocal", ContainerID:"7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200", Pod:"coredns-7db6d8ff4d-q2w9q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali468737ed348", MAC:"ee:13:69:c1:db:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 9 01:42:49.516212 containerd[1483]: 2025-05-09 01:42:49.504 [INFO][3847] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200" Namespace="kube-system" Pod="coredns-7db6d8ff4d-q2w9q" WorkloadEndpoint="ci--4284--0--0--n--bbb05de7dc.novalocal-k8s-coredns--7db6d8ff4d--q2w9q-eth0" May 9 01:42:49.635986 kernel: bpftool[3932]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 9 01:42:49.946291 systemd-networkd[1388]: vxlan.calico: Link UP May 9 01:42:49.946300 systemd-networkd[1388]: vxlan.calico: Gained carrier May 9 01:42:50.170328 kubelet[2823]: I0509 01:42:50.168900 2823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 9 01:42:50.200121 containerd[1483]: time="2025-05-09T01:42:50.199721715Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b6866ffd4-7xb64,Uid:c37c0d0b-f2c5-4e22-9d60-2a762c176d6e,Namespace:calico-apiserver,Attempt:0,}" May 9 01:42:50.200669 containerd[1483]: time="2025-05-09T01:42:50.200433263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fd7846df9-5ctkj,Uid:a7a2f50b-5548-4ade-aed9-490173212465,Namespace:calico-system,Attempt:0,}" May 9 01:42:50.322705 containerd[1483]: time="2025-05-09T01:42:50.322647715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"b53331d22bc18042d02e64cc3ae4f1adee1c4319138a19520f58021f024c105d\" pid:3999 exit_status:1 exited_at:{seconds:1746754970 nanos:321848195}" May 9 01:42:50.414051 containerd[1483]: time="2025-05-09T01:42:50.413716787Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"99bd8749018f36b24cb6f216c478e0de6a0dc5352b8a13ebd5ddc382a22a7950\" pid:4040 exit_status:1 exited_at:{seconds:1746754970 nanos:413308937}" May 9 01:42:51.198875 containerd[1483]: time="2025-05-09T01:42:51.197505258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hcs9,Uid:c18dbaf7-57a9-4315-a675-a08552fcd54a,Namespace:kube-system,Attempt:0,}" May 9 01:42:51.201755 containerd[1483]: time="2025-05-09T01:42:51.201700033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b6866ffd4-264lw,Uid:db01ad2c-f776-4d0e-9399-e00a93c0f573,Namespace:calico-apiserver,Attempt:0,}" May 9 01:42:51.234150 systemd-networkd[1388]: cali468737ed348: Gained IPv6LL May 9 01:42:51.234596 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL May 9 01:42:52.196784 containerd[1483]: time="2025-05-09T01:42:52.196718762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s7ptc,Uid:12706b19-c70f-4b8e-b9f1-5ea62d04108c,Namespace:calico-system,Attempt:0,}" May 9 01:43:20.195810 kubelet[2823]: E0509 01:43:20.195687 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:43:20.296026 kubelet[2823]: E0509 01:43:20.295969 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:43:20.324523 containerd[1483]: time="2025-05-09T01:43:20.324370388Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"a27080bc3f3b491864c02191a98615fb00fea1d36fc607e56aeb05aa3ccf67b9\" pid:4096 exited_at:{seconds:1746755000 nanos:322810776}" May 9 01:43:20.496542 kubelet[2823]: E0509 01:43:20.496253 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:43:20.897530 kubelet[2823]: E0509 01:43:20.897479 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:43:21.349069 kubelet[2823]: I0509 01:43:21.348119 2823 setters.go:580] "Node became not ready" node="ci-4284-0-0-n-bbb05de7dc.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-09T01:43:21Z","lastTransitionTime":"2025-05-09T01:43:21Z","reason":"KubeletNotReady","message":"container runtime is down"} May 9 01:43:21.699475 kubelet[2823]: E0509 01:43:21.698240 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:43:23.299041 kubelet[2823]: E0509 01:43:23.298920 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 
01:43:26.500164 kubelet[2823]: E0509 01:43:26.499934 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:43:31.501031 kubelet[2823]: E0509 01:43:31.500728 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:43:36.502305 kubelet[2823]: E0509 01:43:36.502082 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:43:41.503161 kubelet[2823]: E0509 01:43:41.502928 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:43:46.513774 kubelet[2823]: E0509 01:43:46.513685 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:43:50.404861 containerd[1483]: time="2025-05-09T01:43:50.404341327Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"e4c4293fd1a059e7a110fed99711ceb4fbe30f375d8ab1aae3564ae4d0f2116a\" pid:4133 exited_at:{seconds:1746755030 nanos:403159822}" May 9 01:43:51.514321 kubelet[2823]: E0509 01:43:51.514005 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:43:56.514536 kubelet[2823]: E0509 01:43:56.514373 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:44:01.515462 kubelet[2823]: E0509 01:44:01.515228 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:44:06.516422 kubelet[2823]: E0509 01:44:06.516280 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:44:11.517361 kubelet[2823]: E0509 01:44:11.517109 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:44:16.517848 kubelet[2823]: E0509 01:44:16.517727 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:44:20.391018 containerd[1483]: time="2025-05-09T01:44:20.390857963Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"5fa25727309c49407533118117d2b60f24cec59466b2e796572f898960da9042\" pid:4170 exited_at:{seconds:1746755060 nanos:390103620}" May 9 01:44:21.519161 kubelet[2823]: E0509 01:44:21.519063 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:44:26.520092 kubelet[2823]: E0509 01:44:26.519847 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:44:31.522276 kubelet[2823]: E0509 01:44:31.522136 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:44:36.523230 kubelet[2823]: E0509 01:44:36.522683 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:44:41.524538 kubelet[2823]: E0509 01:44:41.524369 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:44:46.525063 kubelet[2823]: E0509 01:44:46.524826 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:44:50.422010 containerd[1483]: time="2025-05-09T01:44:50.420880337Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"34b94f7654e80247890d4638f26038be844a5b1fbbd1f7c2c50b2e768521a4f9\" pid:4207 exited_at:{seconds:1746755090 nanos:416279610}" May 9 01:44:51.527168 kubelet[2823]: E0509 01:44:51.526053 2823 kubelet.go:2361] 
"Skipping pod synchronization" err="container runtime is down" May 9 01:44:54.387875 kubelet[2823]: E0509 01:44:54.387513 2823 remote_runtime.go:633] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 9 01:44:54.387875 kubelet[2823]: E0509 01:44:54.387777 2823 kubelet.go:2885] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 9 01:44:56.527657 kubelet[2823]: E0509 01:44:56.527532 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:45:01.529224 kubelet[2823]: E0509 01:45:01.528627 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:45:06.529586 kubelet[2823]: E0509 01:45:06.529385 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:45:11.529840 kubelet[2823]: E0509 01:45:11.529753 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:45:16.532149 kubelet[2823]: E0509 01:45:16.530921 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:45:20.400453 containerd[1483]: time="2025-05-09T01:45:20.400265267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"ec46d6d4718eec8164442f7eb6a2dcd9bca7d2db97f86017455b5b7b43fa8bce\" pid:4235 exited_at:{seconds:1746755120 nanos:395661174}" May 9 01:45:21.532078 kubelet[2823]: E0509 01:45:21.531999 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:45:26.532759 kubelet[2823]: E0509 01:45:26.532517 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:45:31.537175 kubelet[2823]: E0509 01:45:31.536269 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:45:36.538101 kubelet[2823]: E0509 01:45:36.537939 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:45:41.538769 kubelet[2823]: E0509 01:45:41.538616 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:45:46.539249 kubelet[2823]: E0509 01:45:46.539113 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:45:50.373069 containerd[1483]: time="2025-05-09T01:45:50.372710383Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"7042f6a730f48a3c03fbdefa91ebeeb27a5d6e0631bd9eb8972e509f5d1dfd13\" pid:4269 exited_at:{seconds:1746755150 nanos:371448279}" May 9 01:45:51.540089 kubelet[2823]: E0509 01:45:51.539788 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:45:56.540473 kubelet[2823]: E0509 01:45:56.540376 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:46:01.542674 kubelet[2823]: E0509 01:46:01.542590 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:46:05.444815 systemd[1]: Started sshd@9-172.24.4.153:22-172.24.4.1:40092.service - OpenSSH per-connection server daemon (172.24.4.1:40092). 
May 9 01:46:06.543615 kubelet[2823]: E0509 01:46:06.543489 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:46:06.648613 sshd[4302]: Accepted publickey for core from 172.24.4.1 port 40092 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:46:06.654303 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:46:06.673144 systemd-logind[1458]: New session 12 of user core. May 9 01:46:06.684322 systemd[1]: Started session-12.scope - Session 12 of User core. May 9 01:46:07.662744 sshd[4304]: Connection closed by 172.24.4.1 port 40092 May 9 01:46:07.662408 sshd-session[4302]: pam_unix(sshd:session): session closed for user core May 9 01:46:07.675115 systemd[1]: sshd@9-172.24.4.153:22-172.24.4.1:40092.service: Deactivated successfully. May 9 01:46:07.683815 systemd[1]: session-12.scope: Deactivated successfully. May 9 01:46:07.687720 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. May 9 01:46:07.691859 systemd-logind[1458]: Removed session 12. May 9 01:46:11.546128 kubelet[2823]: E0509 01:46:11.545365 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:46:12.836344 systemd[1]: Started sshd@10-172.24.4.153:22-172.24.4.1:40094.service - OpenSSH per-connection server daemon (172.24.4.1:40094). May 9 01:46:14.178003 sshd[4325]: Accepted publickey for core from 172.24.4.1 port 40094 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:46:14.179525 sshd-session[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:46:14.207604 systemd-logind[1458]: New session 13 of user core. May 9 01:46:14.216393 systemd[1]: Started session-13.scope - Session 13 of User core. May 9 01:46:14.910856 sshd[4329]: Connection closed by 172.24.4.1 port 40094 May 9 01:46:14.911902 sshd-session[4325]: pam_unix(sshd:session): session closed for user core May 9 01:46:14.925322 systemd[1]: sshd@10-172.24.4.153:22-172.24.4.1:40094.service: Deactivated successfully. May 9 01:46:14.934300 systemd[1]: session-13.scope: Deactivated successfully. May 9 01:46:14.938710 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. May 9 01:46:14.941577 systemd-logind[1458]: Removed session 13. May 9 01:46:16.546839 kubelet[2823]: E0509 01:46:16.546751 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:46:19.933534 systemd[1]: Started sshd@11-172.24.4.153:22-172.24.4.1:58050.service - OpenSSH per-connection server daemon (172.24.4.1:58050). May 9 01:46:20.396506 containerd[1483]: time="2025-05-09T01:46:20.396130933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"1bb81e41443863a5aa57856201b70bb8346fb23f5e8309f53beea1c178e9cbc1\" pid:4357 exited_at:{seconds:1746755180 nanos:394711305}" May 9 01:46:21.182427 sshd[4342]: Accepted publickey for core from 172.24.4.1 port 58050 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:46:21.188833 sshd-session[4342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:46:21.207742 systemd-logind[1458]: New session 14 of user core. May 9 01:46:21.220305 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 9 01:46:21.549187 kubelet[2823]: E0509 01:46:21.547891 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:46:21.982238 sshd[4369]: Connection closed by 172.24.4.1 port 58050 May 9 01:46:21.984024 sshd-session[4342]: pam_unix(sshd:session): session closed for user core May 9 01:46:21.989898 systemd[1]: sshd@11-172.24.4.153:22-172.24.4.1:58050.service: Deactivated successfully. May 9 01:46:21.995845 systemd[1]: session-14.scope: Deactivated successfully. May 9 01:46:21.997410 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. May 9 01:46:21.998840 systemd-logind[1458]: Removed session 14. May 9 01:46:26.549329 kubelet[2823]: E0509 01:46:26.549147 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:46:27.019557 systemd[1]: Started sshd@12-172.24.4.153:22-172.24.4.1:48652.service - OpenSSH per-connection server daemon (172.24.4.1:48652). May 9 01:46:28.214706 sshd[4382]: Accepted publickey for core from 172.24.4.1 port 48652 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:46:28.219389 sshd-session[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:46:28.239098 systemd-logind[1458]: New session 15 of user core. May 9 01:46:28.245323 systemd[1]: Started session-15.scope - Session 15 of User core. May 9 01:46:28.995461 sshd[4384]: Connection closed by 172.24.4.1 port 48652 May 9 01:46:28.997905 sshd-session[4382]: pam_unix(sshd:session): session closed for user core May 9 01:46:29.006871 systemd[1]: sshd@12-172.24.4.153:22-172.24.4.1:48652.service: Deactivated successfully. May 9 01:46:29.013817 systemd[1]: session-15.scope: Deactivated successfully. May 9 01:46:29.015877 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. May 9 01:46:29.018849 systemd-logind[1458]: Removed session 15. May 9 01:46:31.549660 kubelet[2823]: E0509 01:46:31.549489 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:46:34.034655 systemd[1]: Started sshd@13-172.24.4.153:22-172.24.4.1:33850.service - OpenSSH per-connection server daemon (172.24.4.1:33850). May 9 01:46:35.347796 sshd[4402]: Accepted publickey for core from 172.24.4.1 port 33850 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:46:35.351874 sshd-session[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:46:35.371096 systemd-logind[1458]: New session 16 of user core. May 9 01:46:35.386442 systemd[1]: Started session-16.scope - Session 16 of User core. May 9 01:46:36.190993 sshd[4404]: Connection closed by 172.24.4.1 port 33850 May 9 01:46:36.190075 sshd-session[4402]: pam_unix(sshd:session): session closed for user core May 9 01:46:36.203728 systemd[1]: sshd@13-172.24.4.153:22-172.24.4.1:33850.service: Deactivated successfully. May 9 01:46:36.212048 systemd[1]: session-16.scope: Deactivated successfully. May 9 01:46:36.214029 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. May 9 01:46:36.215696 systemd-logind[1458]: Removed session 16. May 9 01:46:36.550559 kubelet[2823]: E0509 01:46:36.549875 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:46:41.224098 systemd[1]: Started sshd@14-172.24.4.153:22-172.24.4.1:33860.service - OpenSSH per-connection server daemon (172.24.4.1:33860). 
May 9 01:46:41.550650 kubelet[2823]: E0509 01:46:41.550438 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:46:42.380026 sshd[4417]: Accepted publickey for core from 172.24.4.1 port 33860 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:46:42.386046 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:46:42.407040 systemd-logind[1458]: New session 17 of user core. May 9 01:46:42.415574 systemd[1]: Started session-17.scope - Session 17 of User core. May 9 01:46:43.222471 sshd[4419]: Connection closed by 172.24.4.1 port 33860 May 9 01:46:43.224078 sshd-session[4417]: pam_unix(sshd:session): session closed for user core May 9 01:46:43.231278 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. May 9 01:46:43.233059 systemd[1]: sshd@14-172.24.4.153:22-172.24.4.1:33860.service: Deactivated successfully. May 9 01:46:43.243745 systemd[1]: session-17.scope: Deactivated successfully. May 9 01:46:43.251336 systemd-logind[1458]: Removed session 17. May 9 01:46:46.551839 kubelet[2823]: E0509 01:46:46.551583 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:46:48.256754 systemd[1]: Started sshd@15-172.24.4.153:22-172.24.4.1:38142.service - OpenSSH per-connection server daemon (172.24.4.1:38142). May 9 01:46:49.199058 kubelet[2823]: E0509 01:46:49.198543 2823 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 9 01:46:49.199058 kubelet[2823]: E0509 01:46:49.198918 2823 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-7db6d8ff4d-q2w9q" May 9 01:46:49.201124 kubelet[2823]: E0509 01:46:49.199098 2823 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-7db6d8ff4d-q2w9q" May 9 01:46:49.201124 kubelet[2823]: E0509 01:46:49.199652 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q2w9q_kube-system(d7bf2986-6599-4731-b275-b481f64f7f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q2w9q_kube-system(d7bf2986-6599-4731-b275-b481f64f7f4e)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-7db6d8ff4d-q2w9q" podUID="d7bf2986-6599-4731-b275-b481f64f7f4e" May 9 01:46:49.344331 sshd[4433]: Accepted publickey for core from 172.24.4.1 port 38142 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:46:49.348228 sshd-session[4433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:46:49.374124 systemd-logind[1458]: New session 18 of user core. May 9 01:46:49.382440 systemd[1]: Started session-18.scope - Session 18 of User core. May 9 01:46:49.963086 sshd[4435]: Connection closed by 172.24.4.1 port 38142 May 9 01:46:49.964531 sshd-session[4433]: pam_unix(sshd:session): session closed for user core May 9 01:46:49.972673 systemd[1]: sshd@15-172.24.4.153:22-172.24.4.1:38142.service: Deactivated successfully. May 9 01:46:49.981450 systemd[1]: session-18.scope: Deactivated successfully. May 9 01:46:49.988840 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. 
May 9 01:46:49.992652 systemd-logind[1458]: Removed session 18. May 9 01:46:50.054226 containerd[1483]: time="2025-05-09T01:46:50.053867633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2w9q,Uid:d7bf2986-6599-4731-b275-b481f64f7f4e,Namespace:kube-system,Attempt:0,}" May 9 01:46:50.057692 containerd[1483]: time="2025-05-09T01:46:50.054657049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-q2w9q,Uid:d7bf2986-6599-4731-b275-b481f64f7f4e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-7db6d8ff4d-q2w9q_kube-system_d7bf2986-6599-4731-b275-b481f64f7f4e_0\": name \"coredns-7db6d8ff4d-q2w9q_kube-system_d7bf2986-6599-4731-b275-b481f64f7f4e_0\" is reserved for \"7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200\"" May 9 01:46:50.058427 kubelet[2823]: E0509 01:46:50.055460 2823 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-7db6d8ff4d-q2w9q_kube-system_d7bf2986-6599-4731-b275-b481f64f7f4e_0\": name \"coredns-7db6d8ff4d-q2w9q_kube-system_d7bf2986-6599-4731-b275-b481f64f7f4e_0\" is reserved for \"7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200\"" May 9 01:46:50.058427 kubelet[2823]: E0509 01:46:50.055587 2823 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-7db6d8ff4d-q2w9q_kube-system_d7bf2986-6599-4731-b275-b481f64f7f4e_0\": name \"coredns-7db6d8ff4d-q2w9q_kube-system_d7bf2986-6599-4731-b275-b481f64f7f4e_0\" is reserved for \"7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200\"" pod="kube-system/coredns-7db6d8ff4d-q2w9q" May 9 01:46:50.058427 kubelet[2823]: E0509 01:46:50.055787 2823 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-7db6d8ff4d-q2w9q_kube-system_d7bf2986-6599-4731-b275-b481f64f7f4e_0\": name \"coredns-7db6d8ff4d-q2w9q_kube-system_d7bf2986-6599-4731-b275-b481f64f7f4e_0\" is reserved for \"7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200\"" pod="kube-system/coredns-7db6d8ff4d-q2w9q" May 9 01:46:50.058956 kubelet[2823]: E0509 01:46:50.056162 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-q2w9q_kube-system(d7bf2986-6599-4731-b275-b481f64f7f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-q2w9q_kube-system(d7bf2986-6599-4731-b275-b481f64f7f4e)\\\": rpc error: code = Unknown desc = failed to reserve sandbox name \\\"coredns-7db6d8ff4d-q2w9q_kube-system_d7bf2986-6599-4731-b275-b481f64f7f4e_0\\\": name \\\"coredns-7db6d8ff4d-q2w9q_kube-system_d7bf2986-6599-4731-b275-b481f64f7f4e_0\\\" is reserved for \\\"7c89d6a2e08f7cf5bfcd679202e4fafd2f4d59fbe4cc770d3e2164ee6704c200\\\"\"" pod="kube-system/coredns-7db6d8ff4d-q2w9q" podUID="d7bf2986-6599-4731-b275-b481f64f7f4e" May 9 01:46:50.198676 kubelet[2823]: E0509 01:46:50.198469 2823 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 9 01:46:50.202111 kubelet[2823]: E0509 01:46:50.199178 2823 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" 
pod="calico-system/calico-kube-controllers-5fd7846df9-5ctkj" May 9 01:46:50.202111 kubelet[2823]: E0509 01:46:50.199305 2823 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/calico-kube-controllers-5fd7846df9-5ctkj" May 9 01:46:50.202111 kubelet[2823]: E0509 01:46:50.199623 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fd7846df9-5ctkj_calico-system(a7a2f50b-5548-4ade-aed9-490173212465)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fd7846df9-5ctkj_calico-system(a7a2f50b-5548-4ade-aed9-490173212465)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-system/calico-kube-controllers-5fd7846df9-5ctkj" podUID="a7a2f50b-5548-4ade-aed9-490173212465" May 9 01:46:50.206152 kubelet[2823]: E0509 01:46:50.204456 2823 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 9 01:46:50.206152 kubelet[2823]: E0509 01:46:50.204910 2823 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-6b6866ffd4-7xb64" May 9 01:46:50.206152 kubelet[2823]: E0509 01:46:50.205048 2823 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-6b6866ffd4-7xb64" May 9 01:46:50.206152 kubelet[2823]: E0509 01:46:50.205734 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6866ffd4-7xb64_calico-apiserver(c37c0d0b-f2c5-4e22-9d60-2a762c176d6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b6866ffd4-7xb64_calico-apiserver(c37c0d0b-f2c5-4e22-9d60-2a762c176d6e)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-apiserver/calico-apiserver-6b6866ffd4-7xb64" podUID="c37c0d0b-f2c5-4e22-9d60-2a762c176d6e" May 9 01:46:50.317324 containerd[1483]: time="2025-05-09T01:46:50.316534285Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"99083315da27eb1942bd3346a1ac2cb266fbd899be3ddc605b54360c7da5f2ab\" pid:4459 exited_at:{seconds:1746755210 nanos:315827863}" May 9 01:46:51.197231 kubelet[2823]: E0509 01:46:51.197056 2823 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 9 01:46:51.197231 kubelet[2823]: E0509 01:46:51.197246 2823 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-7db6d8ff4d-9hcs9" May 9 01:46:51.200166 kubelet[2823]: E0509 01:46:51.197317 2823 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-7db6d8ff4d-9hcs9" May 9 01:46:51.200166 kubelet[2823]: E0509 01:46:51.197456 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9hcs9_kube-system(c18dbaf7-57a9-4315-a675-a08552fcd54a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-9hcs9_kube-system(c18dbaf7-57a9-4315-a675-a08552fcd54a)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-7db6d8ff4d-9hcs9" podUID="c18dbaf7-57a9-4315-a675-a08552fcd54a" May 9 01:46:51.201763 kubelet[2823]: E0509 01:46:51.201599 2823 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 9 01:46:51.202432 kubelet[2823]: E0509 01:46:51.202161 2823 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-6b6866ffd4-264lw" May 9 01:46:51.202432 kubelet[2823]: E0509 01:46:51.202231 2823 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-6b6866ffd4-264lw" May 9 01:46:51.202432 kubelet[2823]: E0509 01:46:51.202344 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b6866ffd4-264lw_calico-apiserver(db01ad2c-f776-4d0e-9399-e00a93c0f573)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b6866ffd4-264lw_calico-apiserver(db01ad2c-f776-4d0e-9399-e00a93c0f573)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-apiserver/calico-apiserver-6b6866ffd4-264lw" podUID="db01ad2c-f776-4d0e-9399-e00a93c0f573" May 9 01:46:51.552791 kubelet[2823]: E0509 01:46:51.552529 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:46:52.064198 containerd[1483]: time="2025-05-09T01:46:52.064068259Z" level=warning msg="container event discarded" container=748f80c43dd4f281001432c6dc884afd775c86017e85aa3e2d14c6db6a361fc6 type=CONTAINER_CREATED_EVENT May 9 01:46:52.064198 containerd[1483]: time="2025-05-09T01:46:52.064161262Z" level=warning msg="container event discarded" container=748f80c43dd4f281001432c6dc884afd775c86017e85aa3e2d14c6db6a361fc6 type=CONTAINER_STARTED_EVENT May 9 01:46:52.079366 containerd[1483]: time="2025-05-09T01:46:52.079292630Z" level=warning msg="container event discarded" container=4df8256ded28f16d5db8fe3b89fa36b5a861f5dbac9c9cbd0a43c2495546f7d9 type=CONTAINER_CREATED_EVENT May 9 01:46:52.079366 containerd[1483]: time="2025-05-09T01:46:52.079319880Z" level=warning msg="container event discarded" container=4df8256ded28f16d5db8fe3b89fa36b5a861f5dbac9c9cbd0a43c2495546f7d9 type=CONTAINER_STARTED_EVENT May 9 01:46:52.079366 containerd[1483]: time="2025-05-09T01:46:52.079329197Z" level=warning msg="container event discarded" container=faa10c70b8d8202a34ee9b4a8f4ae3d072fe2107a045c51d05737ea434d4344c type=CONTAINER_CREATED_EVENT May 9 01:46:52.079366 containerd[1483]: time="2025-05-09T01:46:52.079337883Z" level=warning msg="container event discarded" container=faa10c70b8d8202a34ee9b4a8f4ae3d072fe2107a045c51d05737ea434d4344c type=CONTAINER_STARTED_EVENT May 9 01:46:52.116715 containerd[1483]: time="2025-05-09T01:46:52.116671136Z" level=warning msg="container event discarded" container=2e3a27e5fbcbfbf0ecccdcab7a20e1ba42a136bd38fe033ab812a65fd7bbff20 type=CONTAINER_CREATED_EVENT May 9 01:46:52.153106 containerd[1483]: time="2025-05-09T01:46:52.153038655Z" level=warning msg="container event discarded" container=fa584e48973dd9c101627ebd1781d81912dda3c9317945899997295f7ee5b486 type=CONTAINER_CREATED_EVENT May 9 01:46:52.153106 containerd[1483]: 
time="2025-05-09T01:46:52.153074140Z" level=warning msg="container event discarded" container=fccc4b6b6194d72a3101e2a9dfdd36b3ec7dd2a6d486c1671f4f5fc3751d06cc type=CONTAINER_CREATED_EVENT May 9 01:46:52.196952 kubelet[2823]: E0509 01:46:52.196564 2823 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 9 01:46:52.196952 kubelet[2823]: E0509 01:46:52.196633 2823 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/csi-node-driver-s7ptc" May 9 01:46:52.196952 kubelet[2823]: E0509 01:46:52.196653 2823 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/csi-node-driver-s7ptc" May 9 01:46:52.196952 kubelet[2823]: E0509 01:46:52.196705 2823 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s7ptc_calico-system(12706b19-c70f-4b8e-b9f1-5ea62d04108c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s7ptc_calico-system(12706b19-c70f-4b8e-b9f1-5ea62d04108c)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-system/csi-node-driver-s7ptc" podUID="12706b19-c70f-4b8e-b9f1-5ea62d04108c" May 9 01:46:52.259101 containerd[1483]: time="2025-05-09T01:46:52.259041699Z" level=warning msg="container event discarded" container=2e3a27e5fbcbfbf0ecccdcab7a20e1ba42a136bd38fe033ab812a65fd7bbff20 type=CONTAINER_STARTED_EVENT May 9 01:46:52.315485 containerd[1483]: time="2025-05-09T01:46:52.315392049Z" level=warning msg="container event discarded" container=fccc4b6b6194d72a3101e2a9dfdd36b3ec7dd2a6d486c1671f4f5fc3751d06cc type=CONTAINER_STARTED_EVENT May 9 01:46:52.356107 containerd[1483]: time="2025-05-09T01:46:52.356039620Z" level=warning msg="container event discarded" container=fa584e48973dd9c101627ebd1781d81912dda3c9317945899997295f7ee5b486 type=CONTAINER_STARTED_EVENT May 9 01:46:54.991268 systemd[1]: Started sshd@16-172.24.4.153:22-172.24.4.1:44186.service - OpenSSH per-connection server daemon (172.24.4.1:44186). May 9 01:46:56.199602 sshd[4471]: Accepted publickey for core from 172.24.4.1 port 44186 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ May 9 01:46:56.204682 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 01:46:56.220086 systemd-logind[1458]: New session 19 of user core. May 9 01:46:56.223133 systemd[1]: Started session-19.scope - Session 19 of User core. May 9 01:46:56.554104 kubelet[2823]: E0509 01:46:56.553783 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down" May 9 01:46:57.026531 sshd[4473]: Connection closed by 172.24.4.1 port 44186 May 9 01:46:57.028443 sshd-session[4471]: pam_unix(sshd:session): session closed for user core May 9 01:46:57.037306 systemd[1]: sshd@16-172.24.4.153:22-172.24.4.1:44186.service: Deactivated successfully. May 9 01:46:57.043285 systemd[1]: session-19.scope: Deactivated successfully. May 9 01:46:57.051739 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. May 9 01:46:57.055501 systemd-logind[1458]: Removed session 19. 
May 9 01:46:59.390492 kubelet[2823]: E0509 01:46:59.389559 2823 remote_runtime.go:633] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:46:59.390492 kubelet[2823]: E0509 01:46:59.390105 2823 kubelet.go:2885] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:47:01.554306 kubelet[2823]: E0509 01:47:01.554098 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:47:02.065620 systemd[1]: Started sshd@17-172.24.4.153:22-172.24.4.1:44196.service - OpenSSH per-connection server daemon (172.24.4.1:44196).
May 9 01:47:03.230089 sshd[4488]: Accepted publickey for core from 172.24.4.1 port 44196 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:47:03.233736 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:47:03.253737 systemd-logind[1458]: New session 20 of user core.
May 9 01:47:03.263362 systemd[1]: Started session-20.scope - Session 20 of User core.
May 9 01:47:04.015132 sshd[4490]: Connection closed by 172.24.4.1 port 44196
May 9 01:47:04.016888 sshd-session[4488]: pam_unix(sshd:session): session closed for user core
May 9 01:47:04.030548 systemd[1]: sshd@17-172.24.4.153:22-172.24.4.1:44196.service: Deactivated successfully.
May 9 01:47:04.033905 systemd[1]: session-20.scope: Deactivated successfully.
May 9 01:47:04.037446 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit.
May 9 01:47:04.039438 systemd-logind[1458]: Removed session 20.
May 9 01:47:06.555271 kubelet[2823]: E0509 01:47:06.555161 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:47:09.043038 systemd[1]: Started sshd@18-172.24.4.153:22-172.24.4.1:36342.service - OpenSSH per-connection server daemon (172.24.4.1:36342).
May 9 01:47:10.195484 sshd[4504]: Accepted publickey for core from 172.24.4.1 port 36342 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:47:10.200547 sshd-session[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:47:10.219109 systemd-logind[1458]: New session 21 of user core.
May 9 01:47:10.227421 systemd[1]: Started session-21.scope - Session 21 of User core.
May 9 01:47:10.992223 sshd[4506]: Connection closed by 172.24.4.1 port 36342
May 9 01:47:10.993847 sshd-session[4504]: pam_unix(sshd:session): session closed for user core
May 9 01:47:11.003075 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit.
May 9 01:47:11.004357 systemd[1]: sshd@18-172.24.4.153:22-172.24.4.1:36342.service: Deactivated successfully.
May 9 01:47:11.012394 systemd[1]: session-21.scope: Deactivated successfully.
May 9 01:47:11.016943 systemd-logind[1458]: Removed session 21.
May 9 01:47:11.555743 kubelet[2823]: E0509 01:47:11.555612 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:47:13.691421 containerd[1483]: time="2025-05-09T01:47:13.690942854Z" level=warning msg="container event discarded" container=1f24a95f222ff7d4179b0ae674283cf72ed0c46b1afb4dd776c2360710e16c9b type=CONTAINER_CREATED_EVENT
May 9 01:47:13.691421 containerd[1483]: time="2025-05-09T01:47:13.691382351Z" level=warning msg="container event discarded" container=1f24a95f222ff7d4179b0ae674283cf72ed0c46b1afb4dd776c2360710e16c9b type=CONTAINER_STARTED_EVENT
May 9 01:47:13.730465 containerd[1483]: time="2025-05-09T01:47:13.730328934Z" level=warning msg="container event discarded" container=ff14831cadb091a7fe1867415a56a32a81422c6133905327dccac1dfdec2883f type=CONTAINER_CREATED_EVENT
May 9 01:47:13.813006 containerd[1483]: time="2025-05-09T01:47:13.812838429Z" level=warning msg="container event discarded" container=ff14831cadb091a7fe1867415a56a32a81422c6133905327dccac1dfdec2883f type=CONTAINER_STARTED_EVENT
May 9 01:47:14.026749 containerd[1483]: time="2025-05-09T01:47:14.026377952Z" level=warning msg="container event discarded" container=11eddf98cfd2c924a4dfdff73c06f2f00dda6cce1041a55ae55cbaa576d5e265 type=CONTAINER_CREATED_EVENT
May 9 01:47:14.026749 containerd[1483]: time="2025-05-09T01:47:14.026495862Z" level=warning msg="container event discarded" container=11eddf98cfd2c924a4dfdff73c06f2f00dda6cce1041a55ae55cbaa576d5e265 type=CONTAINER_STARTED_EVENT
May 9 01:47:16.020129 systemd[1]: Started sshd@19-172.24.4.153:22-172.24.4.1:57602.service - OpenSSH per-connection server daemon (172.24.4.1:57602).
May 9 01:47:16.337806 containerd[1483]: time="2025-05-09T01:47:16.337678551Z" level=warning msg="container event discarded" container=e700fa577cd9229c5070b3a198b48b6834856b573fdad72a54a71de5ec2b4c2a type=CONTAINER_CREATED_EVENT
May 9 01:47:16.397107 containerd[1483]: time="2025-05-09T01:47:16.396906227Z" level=warning msg="container event discarded" container=e700fa577cd9229c5070b3a198b48b6834856b573fdad72a54a71de5ec2b4c2a type=CONTAINER_STARTED_EVENT
May 9 01:47:16.556651 kubelet[2823]: E0509 01:47:16.556495 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:47:17.366102 sshd[4522]: Accepted publickey for core from 172.24.4.1 port 57602 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:47:17.369649 sshd-session[4522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:47:17.384048 systemd-logind[1458]: New session 22 of user core.
May 9 01:47:17.392302 systemd[1]: Started session-22.scope - Session 22 of User core.
May 9 01:47:17.975581 sshd[4524]: Connection closed by 172.24.4.1 port 57602
May 9 01:47:17.977345 sshd-session[4522]: pam_unix(sshd:session): session closed for user core
May 9 01:47:17.981768 systemd[1]: sshd@19-172.24.4.153:22-172.24.4.1:57602.service: Deactivated successfully.
May 9 01:47:17.983945 systemd[1]: session-22.scope: Deactivated successfully.
May 9 01:47:17.986271 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit.
May 9 01:47:17.989164 systemd-logind[1458]: Removed session 22.
May 9 01:47:20.333351 containerd[1483]: time="2025-05-09T01:47:20.333238055Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"ac32e2bb2b1b2b25602f41e2b2657071d0a2496b399e74666b41d5cd12e0e3b3\" pid:4551 exited_at:{seconds:1746755240 nanos:330091337}"
May 9 01:47:21.036389 containerd[1483]: time="2025-05-09T01:47:21.035861271Z" level=warning msg="container event discarded" container=c0a5c7b5955bd716536af2f067c131cc05e2432216e78d72a304946f19345c3e type=CONTAINER_CREATED_EVENT
May 9 01:47:21.036389 containerd[1483]: time="2025-05-09T01:47:21.036199521Z" level=warning msg="container event discarded" container=c0a5c7b5955bd716536af2f067c131cc05e2432216e78d72a304946f19345c3e type=CONTAINER_STARTED_EVENT
May 9 01:47:21.063748 containerd[1483]: time="2025-05-09T01:47:21.063600419Z" level=warning msg="container event discarded" container=8935a12f40d0928a675c96b11b63f7b02d647011ac8631b0a0e5f812ccde110e type=CONTAINER_CREATED_EVENT
May 9 01:47:21.064115 containerd[1483]: time="2025-05-09T01:47:21.063753705Z" level=warning msg="container event discarded" container=8935a12f40d0928a675c96b11b63f7b02d647011ac8631b0a0e5f812ccde110e type=CONTAINER_STARTED_EVENT
May 9 01:47:21.557712 kubelet[2823]: E0509 01:47:21.557210 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:47:23.008068 systemd[1]: Started sshd@20-172.24.4.153:22-172.24.4.1:57610.service - OpenSSH per-connection server daemon (172.24.4.1:57610).
May 9 01:47:24.134224 sshd[4564]: Accepted publickey for core from 172.24.4.1 port 57610 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:47:24.141734 sshd-session[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:47:24.166169 systemd-logind[1458]: New session 23 of user core.
May 9 01:47:24.179361 systemd[1]: Started session-23.scope - Session 23 of User core.
May 9 01:47:24.864834 containerd[1483]: time="2025-05-09T01:47:24.864529232Z" level=warning msg="container event discarded" container=107cb795290d84656d5f5c9349924c94a556f3203c1469ae88e1b332d9e48d49 type=CONTAINER_CREATED_EVENT
May 9 01:47:24.934984 sshd[4566]: Connection closed by 172.24.4.1 port 57610
May 9 01:47:24.934108 sshd-session[4564]: pam_unix(sshd:session): session closed for user core
May 9 01:47:24.943581 systemd[1]: sshd@20-172.24.4.153:22-172.24.4.1:57610.service: Deactivated successfully.
May 9 01:47:24.951920 systemd[1]: session-23.scope: Deactivated successfully.
May 9 01:47:24.953854 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit.
May 9 01:47:24.956222 systemd-logind[1458]: Removed session 23.
May 9 01:47:24.991424 containerd[1483]: time="2025-05-09T01:47:24.991324053Z" level=warning msg="container event discarded" container=107cb795290d84656d5f5c9349924c94a556f3203c1469ae88e1b332d9e48d49 type=CONTAINER_STARTED_EVENT
May 9 01:47:26.558185 kubelet[2823]: E0509 01:47:26.558138 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:47:27.003205 containerd[1483]: time="2025-05-09T01:47:27.002954509Z" level=warning msg="container event discarded" container=d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f type=CONTAINER_CREATED_EVENT
May 9 01:47:27.093181 containerd[1483]: time="2025-05-09T01:47:27.093059029Z" level=warning msg="container event discarded" container=d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f type=CONTAINER_STARTED_EVENT
May 9 01:47:27.899845 containerd[1483]: time="2025-05-09T01:47:27.899689226Z" level=warning msg="container event discarded" container=d810b87aae00f1913261930bef2acaa3676997b15c310b9fc4d9b5df95eb760f type=CONTAINER_STOPPED_EVENT
May 9 01:47:29.960177 systemd[1]: Started sshd@21-172.24.4.153:22-172.24.4.1:50008.service - OpenSSH per-connection server daemon (172.24.4.1:50008).
May 9 01:47:31.132768 sshd[4583]: Accepted publickey for core from 172.24.4.1 port 50008 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:47:31.136162 sshd-session[4583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:47:31.150126 systemd-logind[1458]: New session 24 of user core.
May 9 01:47:31.158308 systemd[1]: Started session-24.scope - Session 24 of User core.
May 9 01:47:31.559102 kubelet[2823]: E0509 01:47:31.558789 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:47:31.976210 sshd[4585]: Connection closed by 172.24.4.1 port 50008
May 9 01:47:31.977624 sshd-session[4583]: pam_unix(sshd:session): session closed for user core
May 9 01:47:31.987725 systemd[1]: sshd@21-172.24.4.153:22-172.24.4.1:50008.service: Deactivated successfully.
May 9 01:47:31.994112 systemd[1]: session-24.scope: Deactivated successfully.
May 9 01:47:31.996760 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit.
May 9 01:47:31.999714 systemd-logind[1458]: Removed session 24.
May 9 01:47:35.173009 containerd[1483]: time="2025-05-09T01:47:35.171199137Z" level=warning msg="container event discarded" container=2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d type=CONTAINER_CREATED_EVENT
May 9 01:47:35.290075 containerd[1483]: time="2025-05-09T01:47:35.289910009Z" level=warning msg="container event discarded" container=2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d type=CONTAINER_STARTED_EVENT
May 9 01:47:36.559560 kubelet[2823]: E0509 01:47:36.559402 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:47:37.002379 systemd[1]: Started sshd@22-172.24.4.153:22-172.24.4.1:58306.service - OpenSSH per-connection server daemon (172.24.4.1:58306).
May 9 01:47:37.812865 containerd[1483]: time="2025-05-09T01:47:37.812701403Z" level=warning msg="container event discarded" container=2533b6dbe0cbf9c4cef0ceca9fe9caa207f7523aa00424837e85f9cebb26f05d type=CONTAINER_STOPPED_EVENT
May 9 01:47:38.132032 sshd[4604]: Accepted publickey for core from 172.24.4.1 port 58306 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:47:38.134247 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:47:38.146159 systemd-logind[1458]: New session 25 of user core.
May 9 01:47:38.157327 systemd[1]: Started session-25.scope - Session 25 of User core.
May 9 01:47:39.020075 sshd[4606]: Connection closed by 172.24.4.1 port 58306
May 9 01:47:39.020597 sshd-session[4604]: pam_unix(sshd:session): session closed for user core
May 9 01:47:39.029263 systemd[1]: sshd@22-172.24.4.153:22-172.24.4.1:58306.service: Deactivated successfully.
May 9 01:47:39.032532 systemd[1]: session-25.scope: Deactivated successfully.
May 9 01:47:39.034103 systemd-logind[1458]: Session 25 logged out. Waiting for processes to exit.
May 9 01:47:39.035944 systemd-logind[1458]: Removed session 25.
May 9 01:47:41.559916 kubelet[2823]: E0509 01:47:41.559809 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:47:44.049455 systemd[1]: Started sshd@23-172.24.4.153:22-172.24.4.1:43510.service - OpenSSH per-connection server daemon (172.24.4.1:43510).
May 9 01:47:45.214465 sshd[4621]: Accepted publickey for core from 172.24.4.1 port 43510 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:47:45.219469 sshd-session[4621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:47:45.235199 systemd-logind[1458]: New session 26 of user core.
May 9 01:47:45.245423 systemd[1]: Started session-26.scope - Session 26 of User core.
May 9 01:47:45.955117 sshd[4623]: Connection closed by 172.24.4.1 port 43510
May 9 01:47:45.958308 sshd-session[4621]: pam_unix(sshd:session): session closed for user core
May 9 01:47:45.966430 systemd[1]: sshd@23-172.24.4.153:22-172.24.4.1:43510.service: Deactivated successfully.
May 9 01:47:45.971386 systemd[1]: session-26.scope: Deactivated successfully.
May 9 01:47:45.975434 systemd-logind[1458]: Session 26 logged out. Waiting for processes to exit.
May 9 01:47:45.978318 systemd-logind[1458]: Removed session 26.
May 9 01:47:46.561168 kubelet[2823]: E0509 01:47:46.561049 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:47:47.461542 containerd[1483]: time="2025-05-09T01:47:47.461290090Z" level=warning msg="container event discarded" container=2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb type=CONTAINER_CREATED_EVENT
May 9 01:47:47.592086 containerd[1483]: time="2025-05-09T01:47:47.591909215Z" level=warning msg="container event discarded" container=2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb type=CONTAINER_STARTED_EVENT
May 9 01:47:50.285718 containerd[1483]: time="2025-05-09T01:47:50.285656913Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"6d82454edea9a51c6625fe93966eb68961860af73ede0a4adaa698f5abadb066\" pid:4647 exited_at:{seconds:1746755270 nanos:285108250}"
May 9 01:47:50.981645 systemd[1]: Started sshd@24-172.24.4.153:22-172.24.4.1:43524.service - OpenSSH per-connection server daemon (172.24.4.1:43524).
May 9 01:47:51.561981 kubelet[2823]: E0509 01:47:51.561909 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:47:52.340664 sshd[4659]: Accepted publickey for core from 172.24.4.1 port 43524 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:47:52.344168 sshd-session[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:47:52.358691 systemd-logind[1458]: New session 27 of user core.
May 9 01:47:52.364385 systemd[1]: Started session-27.scope - Session 27 of User core.
May 9 01:47:52.975729 sshd[4662]: Connection closed by 172.24.4.1 port 43524
May 9 01:47:52.977077 sshd-session[4659]: pam_unix(sshd:session): session closed for user core
May 9 01:47:52.980380 systemd-logind[1458]: Session 27 logged out. Waiting for processes to exit.
May 9 01:47:52.981274 systemd[1]: sshd@24-172.24.4.153:22-172.24.4.1:43524.service: Deactivated successfully.
May 9 01:47:52.985852 systemd[1]: session-27.scope: Deactivated successfully.
May 9 01:47:52.989115 systemd-logind[1458]: Removed session 27.
May 9 01:47:56.562925 kubelet[2823]: E0509 01:47:56.562751 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:47:58.002690 systemd[1]: Started sshd@25-172.24.4.153:22-172.24.4.1:45124.service - OpenSSH per-connection server daemon (172.24.4.1:45124).
May 9 01:47:59.133059 sshd[4675]: Accepted publickey for core from 172.24.4.1 port 45124 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:47:59.135922 sshd-session[4675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:47:59.149187 systemd-logind[1458]: New session 28 of user core.
May 9 01:47:59.156311 systemd[1]: Started session-28.scope - Session 28 of User core.
May 9 01:47:59.973376 sshd[4677]: Connection closed by 172.24.4.1 port 45124
May 9 01:47:59.973194 sshd-session[4675]: pam_unix(sshd:session): session closed for user core
May 9 01:47:59.980178 systemd[1]: sshd@25-172.24.4.153:22-172.24.4.1:45124.service: Deactivated successfully.
May 9 01:47:59.985459 systemd[1]: session-28.scope: Deactivated successfully.
May 9 01:47:59.989497 systemd-logind[1458]: Session 28 logged out. Waiting for processes to exit.
May 9 01:47:59.991905 systemd-logind[1458]: Removed session 28.
May 9 01:48:01.563568 kubelet[2823]: E0509 01:48:01.563474 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:48:04.993030 systemd[1]: Started sshd@26-172.24.4.153:22-172.24.4.1:35496.service - OpenSSH per-connection server daemon (172.24.4.1:35496).
May 9 01:48:06.132806 sshd[4692]: Accepted publickey for core from 172.24.4.1 port 35496 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:48:06.134277 sshd-session[4692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:48:06.148122 systemd-logind[1458]: New session 29 of user core.
May 9 01:48:06.157312 systemd[1]: Started session-29.scope - Session 29 of User core.
May 9 01:48:06.564098 kubelet[2823]: E0509 01:48:06.563756 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:48:06.968547 sshd[4694]: Connection closed by 172.24.4.1 port 35496
May 9 01:48:06.969909 sshd-session[4692]: pam_unix(sshd:session): session closed for user core
May 9 01:48:06.978656 systemd[1]: sshd@26-172.24.4.153:22-172.24.4.1:35496.service: Deactivated successfully.
May 9 01:48:06.988107 systemd[1]: session-29.scope: Deactivated successfully.
May 9 01:48:06.991408 systemd-logind[1458]: Session 29 logged out. Waiting for processes to exit.
May 9 01:48:06.994101 systemd-logind[1458]: Removed session 29.
May 9 01:48:11.564188 kubelet[2823]: E0509 01:48:11.564046 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:48:11.990785 systemd[1]: Started sshd@27-172.24.4.153:22-172.24.4.1:35502.service - OpenSSH per-connection server daemon (172.24.4.1:35502).
May 9 01:48:13.131048 sshd[4715]: Accepted publickey for core from 172.24.4.1 port 35502 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:48:13.133527 sshd-session[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:48:13.147085 systemd-logind[1458]: New session 30 of user core.
May 9 01:48:13.153020 systemd[1]: Started session-30.scope - Session 30 of User core.
May 9 01:48:13.984239 sshd[4717]: Connection closed by 172.24.4.1 port 35502
May 9 01:48:13.985646 sshd-session[4715]: pam_unix(sshd:session): session closed for user core
May 9 01:48:13.991199 systemd[1]: sshd@27-172.24.4.153:22-172.24.4.1:35502.service: Deactivated successfully.
May 9 01:48:13.994576 systemd[1]: session-30.scope: Deactivated successfully.
May 9 01:48:13.996117 systemd-logind[1458]: Session 30 logged out. Waiting for processes to exit.
May 9 01:48:13.998282 systemd-logind[1458]: Removed session 30.
May 9 01:48:16.565504 kubelet[2823]: E0509 01:48:16.565299 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:48:19.010213 systemd[1]: Started sshd@28-172.24.4.153:22-172.24.4.1:56500.service - OpenSSH per-connection server daemon (172.24.4.1:56500).
May 9 01:48:20.193029 sshd[4732]: Accepted publickey for core from 172.24.4.1 port 56500 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:48:20.198552 sshd-session[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:48:20.215216 systemd-logind[1458]: New session 31 of user core.
May 9 01:48:20.223941 systemd[1]: Started session-31.scope - Session 31 of User core.
May 9 01:48:20.327882 containerd[1483]: time="2025-05-09T01:48:20.327757714Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"b59092573f8aa7a1eecad0b65a676133e8ee897d420785b5b02baa98a496a5c1\" pid:4748 exited_at:{seconds:1746755300 nanos:327342469}"
May 9 01:48:20.978304 sshd[4741]: Connection closed by 172.24.4.1 port 56500
May 9 01:48:20.979734 sshd-session[4732]: pam_unix(sshd:session): session closed for user core
May 9 01:48:20.988203 systemd[1]: sshd@28-172.24.4.153:22-172.24.4.1:56500.service: Deactivated successfully.
May 9 01:48:20.993129 systemd[1]: session-31.scope: Deactivated successfully.
May 9 01:48:20.995926 systemd-logind[1458]: Session 31 logged out. Waiting for processes to exit.
May 9 01:48:20.998933 systemd-logind[1458]: Removed session 31.
May 9 01:48:21.566020 kubelet[2823]: E0509 01:48:21.565908 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:48:26.001273 systemd[1]: Started sshd@29-172.24.4.153:22-172.24.4.1:34740.service - OpenSSH per-connection server daemon (172.24.4.1:34740).
May 9 01:48:26.567008 kubelet[2823]: E0509 01:48:26.566817 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:48:27.273305 sshd[4771]: Accepted publickey for core from 172.24.4.1 port 34740 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:48:27.276532 sshd-session[4771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:48:27.289105 systemd-logind[1458]: New session 32 of user core.
May 9 01:48:27.298321 systemd[1]: Started session-32.scope - Session 32 of User core.
May 9 01:48:28.006081 sshd[4773]: Connection closed by 172.24.4.1 port 34740
May 9 01:48:28.006580 sshd-session[4771]: pam_unix(sshd:session): session closed for user core
May 9 01:48:28.012456 systemd[1]: sshd@29-172.24.4.153:22-172.24.4.1:34740.service: Deactivated successfully.
May 9 01:48:28.012510 systemd-logind[1458]: Session 32 logged out. Waiting for processes to exit.
May 9 01:48:28.015676 systemd[1]: session-32.scope: Deactivated successfully.
May 9 01:48:28.018183 systemd-logind[1458]: Removed session 32.
May 9 01:48:31.567825 kubelet[2823]: E0509 01:48:31.567671 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:48:33.035226 systemd[1]: Started sshd@30-172.24.4.153:22-172.24.4.1:34750.service - OpenSSH per-connection server daemon (172.24.4.1:34750).
May 9 01:48:34.216108 sshd[4786]: Accepted publickey for core from 172.24.4.1 port 34750 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:48:34.219844 sshd-session[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:48:34.231892 systemd-logind[1458]: New session 33 of user core.
May 9 01:48:34.239293 systemd[1]: Started session-33.scope - Session 33 of User core.
May 9 01:48:35.037129 sshd[4788]: Connection closed by 172.24.4.1 port 34750
May 9 01:48:35.037514 sshd-session[4786]: pam_unix(sshd:session): session closed for user core
May 9 01:48:35.044032 systemd[1]: sshd@30-172.24.4.153:22-172.24.4.1:34750.service: Deactivated successfully.
May 9 01:48:35.046010 systemd[1]: session-33.scope: Deactivated successfully.
May 9 01:48:35.048051 systemd-logind[1458]: Session 33 logged out. Waiting for processes to exit.
May 9 01:48:35.049698 systemd-logind[1458]: Removed session 33.
May 9 01:48:36.568256 kubelet[2823]: E0509 01:48:36.567895 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:48:40.061532 systemd[1]: Started sshd@31-172.24.4.153:22-172.24.4.1:55112.service - OpenSSH per-connection server daemon (172.24.4.1:55112).
May 9 01:48:41.308130 sshd[4801]: Accepted publickey for core from 172.24.4.1 port 55112 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:48:41.311613 sshd-session[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:48:41.324527 systemd-logind[1458]: New session 34 of user core.
May 9 01:48:41.335339 systemd[1]: Started session-34.scope - Session 34 of User core.
May 9 01:48:41.569285 kubelet[2823]: E0509 01:48:41.569068 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:48:41.975558 sshd[4803]: Connection closed by 172.24.4.1 port 55112
May 9 01:48:41.976526 sshd-session[4801]: pam_unix(sshd:session): session closed for user core
May 9 01:48:41.981738 systemd[1]: sshd@31-172.24.4.153:22-172.24.4.1:55112.service: Deactivated successfully.
May 9 01:48:41.984144 systemd[1]: session-34.scope: Deactivated successfully.
May 9 01:48:41.985616 systemd-logind[1458]: Session 34 logged out. Waiting for processes to exit.
May 9 01:48:41.987461 systemd-logind[1458]: Removed session 34.
May 9 01:48:46.570283 kubelet[2823]: E0509 01:48:46.570171 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:48:47.000218 systemd[1]: Started sshd@32-172.24.4.153:22-172.24.4.1:50398.service - OpenSSH per-connection server daemon (172.24.4.1:50398).
May 9 01:48:48.131023 sshd[4818]: Accepted publickey for core from 172.24.4.1 port 50398 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:48:48.132912 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:48:48.143234 systemd-logind[1458]: New session 35 of user core.
May 9 01:48:48.152248 systemd[1]: Started session-35.scope - Session 35 of User core.
May 9 01:48:48.986911 sshd[4820]: Connection closed by 172.24.4.1 port 50398
May 9 01:48:48.987657 sshd-session[4818]: pam_unix(sshd:session): session closed for user core
May 9 01:48:48.993131 systemd-logind[1458]: Session 35 logged out. Waiting for processes to exit.
May 9 01:48:48.993923 systemd[1]: sshd@32-172.24.4.153:22-172.24.4.1:50398.service: Deactivated successfully.
May 9 01:48:48.997435 systemd[1]: session-35.scope: Deactivated successfully.
May 9 01:48:48.999834 systemd-logind[1458]: Removed session 35.
May 9 01:48:50.309823 containerd[1483]: time="2025-05-09T01:48:50.309516503Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"3a30a0def607e57334d9aa893178863eb33054d6d39f15968ffc79e432dbdc1d\" pid:4845 exited_at:{seconds:1746755330 nanos:308798093}"
May 9 01:48:51.571056 kubelet[2823]: E0509 01:48:51.570927 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:48:54.004910 systemd[1]: Started sshd@33-172.24.4.153:22-172.24.4.1:43144.service - OpenSSH per-connection server daemon (172.24.4.1:43144).
May 9 01:48:55.263514 sshd[4858]: Accepted publickey for core from 172.24.4.1 port 43144 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:48:55.267194 sshd-session[4858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:48:55.282458 systemd-logind[1458]: New session 36 of user core.
May 9 01:48:55.288400 systemd[1]: Started session-36.scope - Session 36 of User core.
May 9 01:48:56.005838 sshd[4860]: Connection closed by 172.24.4.1 port 43144
May 9 01:48:56.006573 sshd-session[4858]: pam_unix(sshd:session): session closed for user core
May 9 01:48:56.018828 systemd[1]: sshd@33-172.24.4.153:22-172.24.4.1:43144.service: Deactivated successfully.
May 9 01:48:56.026882 systemd[1]: session-36.scope: Deactivated successfully.
May 9 01:48:56.033405 systemd-logind[1458]: Session 36 logged out. Waiting for processes to exit.
May 9 01:48:56.039018 systemd-logind[1458]: Removed session 36.
May 9 01:48:56.571801 kubelet[2823]: E0509 01:48:56.571738 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:49:01.027752 systemd[1]: Started sshd@34-172.24.4.153:22-172.24.4.1:43154.service - OpenSSH per-connection server daemon (172.24.4.1:43154).
May 9 01:49:01.572142 kubelet[2823]: E0509 01:49:01.571947 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:49:02.339113 sshd[4875]: Accepted publickey for core from 172.24.4.1 port 43154 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:49:02.343469 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:49:02.357473 systemd-logind[1458]: New session 37 of user core.
May 9 01:49:02.365279 systemd[1]: Started session-37.scope - Session 37 of User core.
May 9 01:49:03.265323 sshd[4877]: Connection closed by 172.24.4.1 port 43154
May 9 01:49:03.266792 sshd-session[4875]: pam_unix(sshd:session): session closed for user core
May 9 01:49:03.276316 systemd[1]: sshd@34-172.24.4.153:22-172.24.4.1:43154.service: Deactivated successfully.
May 9 01:49:03.277172 systemd-logind[1458]: Session 37 logged out. Waiting for processes to exit.
May 9 01:49:03.288476 systemd[1]: session-37.scope: Deactivated successfully.
May 9 01:49:03.295655 systemd-logind[1458]: Removed session 37.
May 9 01:49:03.413827 update_engine[1469]: I20250509 01:49:03.413106 1469 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 9 01:49:03.413827 update_engine[1469]: I20250509 01:49:03.413357 1469 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 9 01:49:03.415288 update_engine[1469]: I20250509 01:49:03.414572 1469 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 9 01:49:03.417948 update_engine[1469]: I20250509 01:49:03.417856 1469 omaha_request_params.cc:62] Current group set to alpha
May 9 01:49:03.419872 update_engine[1469]: I20250509 01:49:03.419044 1469 update_attempter.cc:499] Already updated boot flags. Skipping.
May 9 01:49:03.419872 update_engine[1469]: I20250509 01:49:03.419092 1469 update_attempter.cc:643] Scheduling an action processor start.
May 9 01:49:03.419872 update_engine[1469]: I20250509 01:49:03.419154 1469 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 9 01:49:03.419872 update_engine[1469]: I20250509 01:49:03.419353 1469 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 9 01:49:03.419872 update_engine[1469]: I20250509 01:49:03.419544 1469 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 9 01:49:03.419872 update_engine[1469]: I20250509 01:49:03.419572 1469 omaha_request_action.cc:272] Request:
May 9 01:49:03.419872 update_engine[1469]:
May 9 01:49:03.419872 update_engine[1469]:
May 9 01:49:03.419872 update_engine[1469]:
May 9 01:49:03.419872 update_engine[1469]:
May 9 01:49:03.419872 update_engine[1469]:
May 9 01:49:03.419872 update_engine[1469]:
May 9 01:49:03.419872 update_engine[1469]:
May 9 01:49:03.419872 update_engine[1469]:
May 9 01:49:03.419872 update_engine[1469]: I20250509 01:49:03.419596 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 9 01:49:03.425486 locksmithd[1485]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 9 01:49:03.431616 update_engine[1469]: I20250509 01:49:03.431486 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 9 01:49:03.433347 update_engine[1469]: I20250509 01:49:03.433246 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 9 01:49:03.440923 update_engine[1469]: E20250509 01:49:03.440816 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 9 01:49:03.441137 update_engine[1469]: I20250509 01:49:03.441093 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 9 01:49:04.391826 kubelet[2823]: E0509 01:49:04.391701 2823 remote_runtime.go:633] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:49:04.393036 kubelet[2823]: E0509 01:49:04.392870 2823 kubelet.go:2885] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:49:06.573215 kubelet[2823]: E0509 01:49:06.573118 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:49:08.288007 systemd[1]: Started sshd@35-172.24.4.153:22-172.24.4.1:51694.service - OpenSSH per-connection server daemon (172.24.4.1:51694).
May 9 01:49:09.401820 sshd[4895]: Accepted publickey for core from 172.24.4.1 port 51694 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:49:09.405215 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:49:09.419086 systemd-logind[1458]: New session 38 of user core.
May 9 01:49:09.427308 systemd[1]: Started session-38.scope - Session 38 of User core.
May 9 01:49:10.234857 sshd[4897]: Connection closed by 172.24.4.1 port 51694
May 9 01:49:10.235478 sshd-session[4895]: pam_unix(sshd:session): session closed for user core
May 9 01:49:10.244722 systemd[1]: sshd@35-172.24.4.153:22-172.24.4.1:51694.service: Deactivated successfully.
May 9 01:49:10.248882 systemd[1]: session-38.scope: Deactivated successfully.
May 9 01:49:10.251789 systemd-logind[1458]: Session 38 logged out. Waiting for processes to exit.
May 9 01:49:10.254831 systemd-logind[1458]: Removed session 38.
May 9 01:49:11.573836 kubelet[2823]: E0509 01:49:11.573725 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:49:13.412599 update_engine[1469]: I20250509 01:49:13.412380 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 9 01:49:13.413500 update_engine[1469]: I20250509 01:49:13.412926 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 9 01:49:13.413633 update_engine[1469]: I20250509 01:49:13.413548 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 9 01:49:13.418736 update_engine[1469]: E20250509 01:49:13.418648 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 9 01:49:13.418887 update_engine[1469]: I20250509 01:49:13.418795 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 9 01:49:15.258364 systemd[1]: Started sshd@36-172.24.4.153:22-172.24.4.1:52540.service - OpenSSH per-connection server daemon (172.24.4.1:52540).
May 9 01:49:16.439751 sshd[4918]: Accepted publickey for core from 172.24.4.1 port 52540 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:49:16.443027 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:49:16.454509 systemd-logind[1458]: New session 39 of user core.
May 9 01:49:16.464359 systemd[1]: Started session-39.scope - Session 39 of User core.
May 9 01:49:16.574006 kubelet[2823]: E0509 01:49:16.573902 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:49:17.227637 sshd[4920]: Connection closed by 172.24.4.1 port 52540
May 9 01:49:17.228317 sshd-session[4918]: pam_unix(sshd:session): session closed for user core
May 9 01:49:17.233530 systemd[1]: sshd@36-172.24.4.153:22-172.24.4.1:52540.service: Deactivated successfully.
May 9 01:49:17.236299 systemd[1]: session-39.scope: Deactivated successfully.
May 9 01:49:17.237916 systemd-logind[1458]: Session 39 logged out. Waiting for processes to exit.
May 9 01:49:17.241085 systemd-logind[1458]: Removed session 39.
May 9 01:49:20.292632 containerd[1483]: time="2025-05-09T01:49:20.292527578Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"808ca44cdbee58cd73d7fe319402bb2491edb1e0ed03e857b54394d629c7723b\" pid:4943 exited_at:{seconds:1746755360 nanos:290122877}"
May 9 01:49:21.574985 kubelet[2823]: E0509 01:49:21.574885 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:49:22.255341 systemd[1]: Started sshd@37-172.24.4.153:22-172.24.4.1:52554.service - OpenSSH per-connection server daemon (172.24.4.1:52554).
May 9 01:49:23.379529 sshd[4957]: Accepted publickey for core from 172.24.4.1 port 52554 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:49:23.382613 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:49:23.397145 systemd-logind[1458]: New session 40 of user core.
May 9 01:49:23.405378 systemd[1]: Started session-40.scope - Session 40 of User core.
May 9 01:49:23.412532 update_engine[1469]: I20250509 01:49:23.412163 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 9 01:49:23.413156 update_engine[1469]: I20250509 01:49:23.412637 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 9 01:49:23.413352 update_engine[1469]: I20250509 01:49:23.413260 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 9 01:49:23.419448 update_engine[1469]: E20250509 01:49:23.418896 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 9 01:49:23.419448 update_engine[1469]: I20250509 01:49:23.419101 1469 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 9 01:49:24.217706 sshd[4959]: Connection closed by 172.24.4.1 port 52554
May 9 01:49:24.219269 sshd-session[4957]: pam_unix(sshd:session): session closed for user core
May 9 01:49:24.229601 systemd[1]: sshd@37-172.24.4.153:22-172.24.4.1:52554.service: Deactivated successfully.
May 9 01:49:24.237529 systemd[1]: session-40.scope: Deactivated successfully.
May 9 01:49:24.246428 systemd-logind[1458]: Session 40 logged out. Waiting for processes to exit.
May 9 01:49:24.248880 systemd-logind[1458]: Removed session 40.
May 9 01:49:26.575845 kubelet[2823]: E0509 01:49:26.575703 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:49:29.267265 systemd[1]: Started sshd@38-172.24.4.153:22-172.24.4.1:49986.service - OpenSSH per-connection server daemon (172.24.4.1:49986).
May 9 01:49:30.383255 sshd[4972]: Accepted publickey for core from 172.24.4.1 port 49986 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:49:30.387478 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:49:30.403326 systemd-logind[1458]: New session 41 of user core.
May 9 01:49:30.413337 systemd[1]: Started session-41.scope - Session 41 of User core.
May 9 01:49:31.185180 sshd[4974]: Connection closed by 172.24.4.1 port 49986
May 9 01:49:31.189375 sshd-session[4972]: pam_unix(sshd:session): session closed for user core
May 9 01:49:31.210336 systemd[1]: sshd@38-172.24.4.153:22-172.24.4.1:49986.service: Deactivated successfully.
May 9 01:49:31.220455 systemd[1]: session-41.scope: Deactivated successfully.
May 9 01:49:31.224205 systemd-logind[1458]: Session 41 logged out. Waiting for processes to exit.
May 9 01:49:31.230467 systemd-logind[1458]: Removed session 41.
May 9 01:49:31.577464 kubelet[2823]: E0509 01:49:31.577040 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:49:33.415349 update_engine[1469]: I20250509 01:49:33.413697 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 9 01:49:33.417268 update_engine[1469]: I20250509 01:49:33.417140 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 9 01:49:33.418772 update_engine[1469]: I20250509 01:49:33.418655 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 9 01:49:33.425944 update_engine[1469]: E20250509 01:49:33.423938 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 9 01:49:33.425944 update_engine[1469]: I20250509 01:49:33.424188 1469 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 9 01:49:33.425944 update_engine[1469]: I20250509 01:49:33.424237 1469 omaha_request_action.cc:617] Omaha request response:
May 9 01:49:33.425944 update_engine[1469]: E20250509 01:49:33.424683 1469 omaha_request_action.cc:636] Omaha request network transfer failed.
May 9 01:49:33.425944 update_engine[1469]: I20250509 01:49:33.425659 1469 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 9 01:49:33.425944 update_engine[1469]: I20250509 01:49:33.425688 1469 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 9 01:49:33.425944 update_engine[1469]: I20250509 01:49:33.425710 1469 update_attempter.cc:306] Processing Done.
May 9 01:49:33.426857 update_engine[1469]: E20250509 01:49:33.426803 1469 update_attempter.cc:619] Update failed.
May 9 01:49:33.427214 update_engine[1469]: I20250509 01:49:33.427163 1469 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 9 01:49:33.427410 update_engine[1469]: I20250509 01:49:33.427367 1469 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 9 01:49:33.427582 update_engine[1469]: I20250509 01:49:33.427543 1469 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 9 01:49:33.428231 update_engine[1469]: I20250509 01:49:33.428179 1469 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 9 01:49:33.428625 update_engine[1469]: I20250509 01:49:33.428509 1469 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 9 01:49:33.430012 update_engine[1469]: I20250509 01:49:33.429172 1469 omaha_request_action.cc:272] Request:
May 9 01:49:33.430012 update_engine[1469]:
May 9 01:49:33.430012 update_engine[1469]:
May 9 01:49:33.430012 update_engine[1469]:
May 9 01:49:33.430012 update_engine[1469]:
May 9 01:49:33.430012 update_engine[1469]:
May 9 01:49:33.430012 update_engine[1469]:
May 9 01:49:33.430012 update_engine[1469]: I20250509 01:49:33.429311 1469 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 9 01:49:33.430012 update_engine[1469]: I20250509 01:49:33.429665 1469 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 9 01:49:33.431266 update_engine[1469]: I20250509 01:49:33.431177 1469 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 9 01:49:33.431808 locksmithd[1485]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 9 01:49:33.436829 update_engine[1469]: E20250509 01:49:33.436321 1469 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 9 01:49:33.436829 update_engine[1469]: I20250509 01:49:33.436445 1469 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 9 01:49:33.436829 update_engine[1469]: I20250509 01:49:33.436473 1469 omaha_request_action.cc:617] Omaha request response:
May 9 01:49:33.436829 update_engine[1469]: I20250509 01:49:33.436488 1469 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 9 01:49:33.436829 update_engine[1469]: I20250509 01:49:33.436501 1469 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 9 01:49:33.436829 update_engine[1469]: I20250509 01:49:33.436513 1469 update_attempter.cc:306] Processing Done.
May 9 01:49:33.436829 update_engine[1469]: I20250509 01:49:33.436527 1469 update_attempter.cc:310] Error event sent.
May 9 01:49:33.436829 update_engine[1469]: I20250509 01:49:33.436571 1469 update_check_scheduler.cc:74] Next update check in 45m47s
May 9 01:49:33.438753 locksmithd[1485]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 9 01:49:36.219243 systemd[1]: Started sshd@39-172.24.4.153:22-172.24.4.1:52634.service - OpenSSH per-connection server daemon (172.24.4.1:52634).
May 9 01:49:36.578163 kubelet[2823]: E0509 01:49:36.577859 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:49:37.376869 sshd[4987]: Accepted publickey for core from 172.24.4.1 port 52634 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:49:37.378203 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:49:37.392007 systemd-logind[1458]: New session 42 of user core.
May 9 01:49:37.397104 systemd[1]: Started session-42.scope - Session 42 of User core.
May 9 01:49:38.207216 sshd[4989]: Connection closed by 172.24.4.1 port 52634
May 9 01:49:38.208038 sshd-session[4987]: pam_unix(sshd:session): session closed for user core
May 9 01:49:38.216239 systemd-logind[1458]: Session 42 logged out. Waiting for processes to exit.
May 9 01:49:38.217065 systemd[1]: sshd@39-172.24.4.153:22-172.24.4.1:52634.service: Deactivated successfully.
May 9 01:49:38.223060 systemd[1]: session-42.scope: Deactivated successfully.
May 9 01:49:38.228257 systemd-logind[1458]: Removed session 42.
May 9 01:49:41.578437 kubelet[2823]: E0509 01:49:41.578356 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:49:43.243244 systemd[1]: Started sshd@40-172.24.4.153:22-172.24.4.1:52638.service - OpenSSH per-connection server daemon (172.24.4.1:52638).
May 9 01:49:44.441472 sshd[5002]: Accepted publickey for core from 172.24.4.1 port 52638 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:49:44.444715 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:49:44.459073 systemd-logind[1458]: New session 43 of user core.
May 9 01:49:44.466317 systemd[1]: Started session-43.scope - Session 43 of User core.
May 9 01:49:45.262209 sshd[5006]: Connection closed by 172.24.4.1 port 52638
May 9 01:49:45.262885 sshd-session[5002]: pam_unix(sshd:session): session closed for user core
May 9 01:49:45.274121 systemd[1]: sshd@40-172.24.4.153:22-172.24.4.1:52638.service: Deactivated successfully.
May 9 01:49:45.281174 systemd[1]: session-43.scope: Deactivated successfully.
May 9 01:49:45.284168 systemd-logind[1458]: Session 43 logged out. Waiting for processes to exit.
May 9 01:49:45.287579 systemd-logind[1458]: Removed session 43.
May 9 01:49:46.579660 kubelet[2823]: E0509 01:49:46.579486 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:49:50.292454 systemd[1]: Started sshd@41-172.24.4.153:22-172.24.4.1:39126.service - OpenSSH per-connection server daemon (172.24.4.1:39126).
May 9 01:49:50.345893 containerd[1483]: time="2025-05-09T01:49:50.345776747Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"537e37293fab45ced1cb64fff7bd6a748eff1250ed25a1c7c3104cd3baa6b1bc\" pid:5031 exited_at:{seconds:1746755390 nanos:345072822}"
May 9 01:49:51.400910 sshd[5042]: Accepted publickey for core from 172.24.4.1 port 39126 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:49:51.405122 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:49:51.452226 systemd-logind[1458]: New session 44 of user core.
May 9 01:49:51.461049 systemd[1]: Started session-44.scope - Session 44 of User core.
May 9 01:49:51.580035 kubelet[2823]: E0509 01:49:51.579840 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:49:52.221519 sshd[5046]: Connection closed by 172.24.4.1 port 39126
May 9 01:49:52.223588 sshd-session[5042]: pam_unix(sshd:session): session closed for user core
May 9 01:49:52.233716 systemd-logind[1458]: Session 44 logged out. Waiting for processes to exit.
May 9 01:49:52.234627 systemd[1]: sshd@41-172.24.4.153:22-172.24.4.1:39126.service: Deactivated successfully.
May 9 01:49:52.241375 systemd[1]: session-44.scope: Deactivated successfully.
May 9 01:49:52.248223 systemd-logind[1458]: Removed session 44.
May 9 01:49:56.581836 kubelet[2823]: E0509 01:49:56.581338 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:49:57.264515 systemd[1]: Started sshd@42-172.24.4.153:22-172.24.4.1:53364.service - OpenSSH per-connection server daemon (172.24.4.1:53364).
May 9 01:49:58.430135 sshd[5060]: Accepted publickey for core from 172.24.4.1 port 53364 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:49:58.432757 sshd-session[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:49:58.442976 systemd-logind[1458]: New session 45 of user core.
May 9 01:49:58.447144 systemd[1]: Started session-45.scope - Session 45 of User core.
May 9 01:49:59.251063 sshd[5062]: Connection closed by 172.24.4.1 port 53364
May 9 01:49:59.250706 sshd-session[5060]: pam_unix(sshd:session): session closed for user core
May 9 01:49:59.267911 systemd[1]: sshd@42-172.24.4.153:22-172.24.4.1:53364.service: Deactivated successfully.
May 9 01:49:59.276852 systemd[1]: session-45.scope: Deactivated successfully.
May 9 01:49:59.278917 systemd-logind[1458]: Session 45 logged out. Waiting for processes to exit.
May 9 01:49:59.282257 systemd-logind[1458]: Removed session 45.
May 9 01:50:01.582353 kubelet[2823]: E0509 01:50:01.582170 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:50:04.284759 systemd[1]: Started sshd@43-172.24.4.153:22-172.24.4.1:52574.service - OpenSSH per-connection server daemon (172.24.4.1:52574).
May 9 01:50:05.389825 sshd[5077]: Accepted publickey for core from 172.24.4.1 port 52574 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:50:05.393427 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:50:05.409454 systemd-logind[1458]: New session 46 of user core.
May 9 01:50:05.417310 systemd[1]: Started session-46.scope - Session 46 of User core.
May 9 01:50:06.271710 sshd[5079]: Connection closed by 172.24.4.1 port 52574
May 9 01:50:06.273353 sshd-session[5077]: pam_unix(sshd:session): session closed for user core
May 9 01:50:06.285520 systemd[1]: sshd@43-172.24.4.153:22-172.24.4.1:52574.service: Deactivated successfully.
May 9 01:50:06.292142 systemd[1]: session-46.scope: Deactivated successfully.
May 9 01:50:06.294845 systemd-logind[1458]: Session 46 logged out. Waiting for processes to exit.
May 9 01:50:06.297868 systemd-logind[1458]: Removed session 46.
May 9 01:50:06.582639 kubelet[2823]: E0509 01:50:06.582421 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:50:11.311767 systemd[1]: Started sshd@44-172.24.4.153:22-172.24.4.1:52584.service - OpenSSH per-connection server daemon (172.24.4.1:52584).
May 9 01:50:11.584309 kubelet[2823]: E0509 01:50:11.583686 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:50:12.414195 sshd[5091]: Accepted publickey for core from 172.24.4.1 port 52584 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:50:12.418845 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:50:12.438523 systemd-logind[1458]: New session 47 of user core.
May 9 01:50:12.448385 systemd[1]: Started session-47.scope - Session 47 of User core.
May 9 01:50:13.240101 sshd[5093]: Connection closed by 172.24.4.1 port 52584
May 9 01:50:13.242261 sshd-session[5091]: pam_unix(sshd:session): session closed for user core
May 9 01:50:13.264397 systemd[1]: sshd@44-172.24.4.153:22-172.24.4.1:52584.service: Deactivated successfully.
May 9 01:50:13.272301 systemd[1]: session-47.scope: Deactivated successfully.
May 9 01:50:13.277878 systemd-logind[1458]: Session 47 logged out. Waiting for processes to exit.
May 9 01:50:13.283690 systemd[1]: Started sshd@45-172.24.4.153:22-172.24.4.1:52590.service - OpenSSH per-connection server daemon (172.24.4.1:52590).
May 9 01:50:13.288747 systemd-logind[1458]: Removed session 47.
May 9 01:50:14.422390 sshd[5105]: Accepted publickey for core from 172.24.4.1 port 52590 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:50:14.425812 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:50:14.438461 systemd-logind[1458]: New session 48 of user core.
May 9 01:50:14.448298 systemd[1]: Started session-48.scope - Session 48 of User core.
May 9 01:50:15.256228 sshd[5110]: Connection closed by 172.24.4.1 port 52590
May 9 01:50:15.256107 sshd-session[5105]: pam_unix(sshd:session): session closed for user core
May 9 01:50:15.266280 systemd[1]: sshd@45-172.24.4.153:22-172.24.4.1:52590.service: Deactivated successfully.
May 9 01:50:15.269152 systemd[1]: session-48.scope: Deactivated successfully.
May 9 01:50:15.271720 systemd-logind[1458]: Session 48 logged out. Waiting for processes to exit.
May 9 01:50:15.274447 systemd[1]: Started sshd@46-172.24.4.153:22-172.24.4.1:45182.service - OpenSSH per-connection server daemon (172.24.4.1:45182).
May 9 01:50:15.280285 systemd-logind[1458]: Removed session 48.
May 9 01:50:16.584263 kubelet[2823]: E0509 01:50:16.584115 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:50:16.605057 sshd[5119]: Accepted publickey for core from 172.24.4.1 port 45182 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:50:16.608289 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:50:16.620573 systemd-logind[1458]: New session 49 of user core.
May 9 01:50:16.630366 systemd[1]: Started session-49.scope - Session 49 of User core.
May 9 01:50:17.344094 sshd[5122]: Connection closed by 172.24.4.1 port 45182
May 9 01:50:17.345634 sshd-session[5119]: pam_unix(sshd:session): session closed for user core
May 9 01:50:17.353260 systemd[1]: sshd@46-172.24.4.153:22-172.24.4.1:45182.service: Deactivated successfully.
May 9 01:50:17.358147 systemd[1]: session-49.scope: Deactivated successfully.
May 9 01:50:17.360089 systemd-logind[1458]: Session 49 logged out. Waiting for processes to exit.
May 9 01:50:17.362657 systemd-logind[1458]: Removed session 49.
May 9 01:50:20.341588 containerd[1483]: time="2025-05-09T01:50:20.341463728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"bebdf3a89b47d4366366e6a494eb18df3a91edae9f659f05a647d035fd84fb83\" pid:5144 exited_at:{seconds:1746755420 nanos:340610033}"
May 9 01:50:21.585295 kubelet[2823]: E0509 01:50:21.585222 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:50:22.373477 systemd[1]: Started sshd@47-172.24.4.153:22-172.24.4.1:45194.service - OpenSSH per-connection server daemon (172.24.4.1:45194).
May 9 01:50:23.523048 sshd[5158]: Accepted publickey for core from 172.24.4.1 port 45194 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:50:23.523891 sshd-session[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:50:23.529579 systemd-logind[1458]: New session 50 of user core.
May 9 01:50:23.539125 systemd[1]: Started session-50.scope - Session 50 of User core.
May 9 01:50:24.120729 sshd[5160]: Connection closed by 172.24.4.1 port 45194
May 9 01:50:24.120420 sshd-session[5158]: pam_unix(sshd:session): session closed for user core
May 9 01:50:24.136677 systemd[1]: sshd@47-172.24.4.153:22-172.24.4.1:45194.service: Deactivated successfully.
May 9 01:50:24.140127 systemd[1]: session-50.scope: Deactivated successfully.
May 9 01:50:24.141360 systemd-logind[1458]: Session 50 logged out. Waiting for processes to exit.
May 9 01:50:24.146682 systemd[1]: Started sshd@48-172.24.4.153:22-172.24.4.1:43370.service - OpenSSH per-connection server daemon (172.24.4.1:43370).
May 9 01:50:24.148407 systemd-logind[1458]: Removed session 50.
May 9 01:50:25.346092 sshd[5171]: Accepted publickey for core from 172.24.4.1 port 43370 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:50:25.348932 sshd-session[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:50:25.359404 systemd-logind[1458]: New session 51 of user core.
May 9 01:50:25.365261 systemd[1]: Started session-51.scope - Session 51 of User core.
May 9 01:50:26.368171 sshd[5174]: Connection closed by 172.24.4.1 port 43370
May 9 01:50:26.370656 sshd-session[5171]: pam_unix(sshd:session): session closed for user core
May 9 01:50:26.393484 systemd[1]: sshd@48-172.24.4.153:22-172.24.4.1:43370.service: Deactivated successfully.
May 9 01:50:26.400209 systemd[1]: session-51.scope: Deactivated successfully.
May 9 01:50:26.403660 systemd-logind[1458]: Session 51 logged out. Waiting for processes to exit.
May 9 01:50:26.412733 systemd[1]: Started sshd@49-172.24.4.153:22-172.24.4.1:43380.service - OpenSSH per-connection server daemon (172.24.4.1:43380).
May 9 01:50:26.417953 systemd-logind[1458]: Removed session 51.
May 9 01:50:26.586248 kubelet[2823]: E0509 01:50:26.586141 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:50:27.682933 sshd[5183]: Accepted publickey for core from 172.24.4.1 port 43380 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:50:27.685054 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:50:27.693186 systemd-logind[1458]: New session 52 of user core.
May 9 01:50:27.698121 systemd[1]: Started session-52.scope - Session 52 of User core.
May 9 01:50:31.331689 sshd[5186]: Connection closed by 172.24.4.1 port 43380
May 9 01:50:31.334490 sshd-session[5183]: pam_unix(sshd:session): session closed for user core
May 9 01:50:31.347238 systemd[1]: sshd@49-172.24.4.153:22-172.24.4.1:43380.service: Deactivated successfully.
May 9 01:50:31.350332 systemd[1]: session-52.scope: Deactivated successfully.
May 9 01:50:31.350730 systemd[1]: session-52.scope: Consumed 984ms CPU time, 64.3M memory peak.
May 9 01:50:31.353246 systemd-logind[1458]: Session 52 logged out. Waiting for processes to exit.
May 9 01:50:31.356460 systemd[1]: Started sshd@50-172.24.4.153:22-172.24.4.1:43386.service - OpenSSH per-connection server daemon (172.24.4.1:43386).
May 9 01:50:31.359604 systemd-logind[1458]: Removed session 52.
May 9 01:50:31.586813 kubelet[2823]: E0509 01:50:31.586758 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:50:32.643282 sshd[5202]: Accepted publickey for core from 172.24.4.1 port 43386 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:50:32.646776 sshd-session[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:50:32.662119 systemd-logind[1458]: New session 53 of user core.
May 9 01:50:32.672347 systemd[1]: Started session-53.scope - Session 53 of User core.
May 9 01:50:33.663988 sshd[5205]: Connection closed by 172.24.4.1 port 43386
May 9 01:50:33.663104 sshd-session[5202]: pam_unix(sshd:session): session closed for user core
May 9 01:50:33.677519 systemd[1]: sshd@50-172.24.4.153:22-172.24.4.1:43386.service: Deactivated successfully.
May 9 01:50:33.680002 systemd[1]: session-53.scope: Deactivated successfully.
May 9 01:50:33.681264 systemd-logind[1458]: Session 53 logged out. Waiting for processes to exit.
May 9 01:50:33.687239 systemd[1]: Started sshd@51-172.24.4.153:22-172.24.4.1:36882.service - OpenSSH per-connection server daemon (172.24.4.1:36882).
May 9 01:50:33.689578 systemd-logind[1458]: Removed session 53.
May 9 01:50:34.802072 sshd[5214]: Accepted publickey for core from 172.24.4.1 port 36882 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:50:34.806499 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:50:34.821008 systemd-logind[1458]: New session 54 of user core.
May 9 01:50:34.829422 systemd[1]: Started session-54.scope - Session 54 of User core.
May 9 01:50:35.632606 sshd[5217]: Connection closed by 172.24.4.1 port 36882
May 9 01:50:35.632465 sshd-session[5214]: pam_unix(sshd:session): session closed for user core
May 9 01:50:35.637709 systemd[1]: sshd@51-172.24.4.153:22-172.24.4.1:36882.service: Deactivated successfully.
May 9 01:50:35.643724 systemd[1]: session-54.scope: Deactivated successfully.
May 9 01:50:35.647288 systemd-logind[1458]: Session 54 logged out. Waiting for processes to exit.
May 9 01:50:35.648943 systemd-logind[1458]: Removed session 54.
May 9 01:50:36.587884 kubelet[2823]: E0509 01:50:36.587801 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:50:40.660529 systemd[1]: Started sshd@52-172.24.4.153:22-172.24.4.1:36886.service - OpenSSH per-connection server daemon (172.24.4.1:36886).
May 9 01:50:41.588852 kubelet[2823]: E0509 01:50:41.588731 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:50:41.961423 sshd[5239]: Accepted publickey for core from 172.24.4.1 port 36886 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:50:41.963902 sshd-session[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:50:41.977208 systemd-logind[1458]: New session 55 of user core.
May 9 01:50:41.986296 systemd[1]: Started session-55.scope - Session 55 of User core.
May 9 01:50:42.766322 sshd[5241]: Connection closed by 172.24.4.1 port 36886
May 9 01:50:42.768534 sshd-session[5239]: pam_unix(sshd:session): session closed for user core
May 9 01:50:42.779838 systemd[1]: sshd@52-172.24.4.153:22-172.24.4.1:36886.service: Deactivated successfully.
May 9 01:50:42.789027 systemd[1]: session-55.scope: Deactivated successfully.
May 9 01:50:42.793233 systemd-logind[1458]: Session 55 logged out. Waiting for processes to exit.
May 9 01:50:42.795914 systemd-logind[1458]: Removed session 55.
May 9 01:50:46.589325 kubelet[2823]: E0509 01:50:46.589159 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:50:47.797749 systemd[1]: Started sshd@53-172.24.4.153:22-172.24.4.1:60088.service - OpenSSH per-connection server daemon (172.24.4.1:60088).
May 9 01:50:49.158852 sshd[5261]: Accepted publickey for core from 172.24.4.1 port 60088 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:50:49.162554 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:50:49.178858 systemd-logind[1458]: New session 56 of user core.
May 9 01:50:49.186392 systemd[1]: Started session-56.scope - Session 56 of User core.
May 9 01:50:49.963788 sshd[5263]: Connection closed by 172.24.4.1 port 60088
May 9 01:50:49.964457 sshd-session[5261]: pam_unix(sshd:session): session closed for user core
May 9 01:50:49.972405 systemd[1]: sshd@53-172.24.4.153:22-172.24.4.1:60088.service: Deactivated successfully.
May 9 01:50:49.977854 systemd[1]: session-56.scope: Deactivated successfully.
May 9 01:50:49.982794 systemd-logind[1458]: Session 56 logged out. Waiting for processes to exit.
May 9 01:50:49.985810 systemd-logind[1458]: Removed session 56.
May 9 01:50:50.360937 containerd[1483]: time="2025-05-09T01:50:50.360851590Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"215838066839c87b9baf40c99da26f012805a8c77af57d35c85716ba425e954e\" pid:5288 exited_at:{seconds:1746755450 nanos:359643742}"
May 9 01:50:51.590098 kubelet[2823]: E0509 01:50:51.589952 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:50:54.988472 systemd[1]: Started sshd@54-172.24.4.153:22-172.24.4.1:46268.service - OpenSSH per-connection server daemon (172.24.4.1:46268).
May 9 01:50:56.281120 sshd[5303]: Accepted publickey for core from 172.24.4.1 port 46268 ssh2: RSA SHA256:o6rQTevsCB7Tos+XSx+N56tMFqugBL0zpqBsIEWC0xQ
May 9 01:50:56.285034 sshd-session[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 01:50:56.300691 systemd-logind[1458]: New session 57 of user core.
May 9 01:50:56.310395 systemd[1]: Started session-57.scope - Session 57 of User core.
May 9 01:50:56.590302 kubelet[2823]: E0509 01:50:56.590229 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:50:57.083576 sshd[5305]: Connection closed by 172.24.4.1 port 46268
May 9 01:50:57.083468 sshd-session[5303]: pam_unix(sshd:session): session closed for user core
May 9 01:50:57.086854 systemd[1]: sshd@54-172.24.4.153:22-172.24.4.1:46268.service: Deactivated successfully.
May 9 01:50:57.090219 systemd[1]: session-57.scope: Deactivated successfully.
May 9 01:50:57.094468 systemd-logind[1458]: Session 57 logged out. Waiting for processes to exit.
May 9 01:50:57.097227 systemd-logind[1458]: Removed session 57.
May 9 01:51:01.591472 kubelet[2823]: E0509 01:51:01.591359 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:51:06.591715 kubelet[2823]: E0509 01:51:06.591602 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:51:09.395412 kubelet[2823]: E0509 01:51:09.394608 2823 remote_runtime.go:633] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:51:09.395412 kubelet[2823]: E0509 01:51:09.395363 2823 kubelet.go:2885] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:51:11.592780 kubelet[2823]: E0509 01:51:11.592684 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:51:16.593606 kubelet[2823]: E0509 01:51:16.593325 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:51:20.366629 containerd[1483]: time="2025-05-09T01:51:20.366444040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"0b25ff66499c70371cae622fc371b7621a43607b83d3303295cb9ded5699633c\" pid:5341 exited_at:{seconds:1746755480 nanos:365314737}"
May 9 01:51:21.594404 kubelet[2823]: E0509 01:51:21.594311 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:51:26.595552 kubelet[2823]: E0509 01:51:26.595459 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:51:31.596792 kubelet[2823]: E0509 01:51:31.596674 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:51:36.598135 kubelet[2823]: E0509 01:51:36.598028 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:51:41.599100 kubelet[2823]: E0509 01:51:41.599003 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:51:46.600109 kubelet[2823]: E0509 01:51:46.600003 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:51:50.309813 containerd[1483]: time="2025-05-09T01:51:50.309651939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"84145ea53fde2434ae2fdc505ea1df18681e5128daa53716b35de29722877b49\" pid:5381 exited_at:{seconds:1746755510 nanos:309169956}"
May 9 01:51:51.601257 kubelet[2823]: E0509 01:51:51.601148 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:51:56.601729 kubelet[2823]: E0509 01:51:56.601630 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:52:01.602270 kubelet[2823]: E0509 01:52:01.602167 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:52:06.602649 kubelet[2823]: E0509 01:52:06.602571 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:52:11.603594 kubelet[2823]: E0509 01:52:11.603492 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:52:16.604634 kubelet[2823]: E0509 01:52:16.604554 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:52:20.391107 containerd[1483]: time="2025-05-09T01:52:20.391058007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"5f1b65ca2d823dc374f59a9af5cd62f89966d1406b3a913de457be4606ff2e76\" pid:5421 exited_at:{seconds:1746755540 nanos:390609742}"
May 9 01:52:21.605142 kubelet[2823]: E0509 01:52:21.605047 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:52:26.605716 kubelet[2823]: E0509 01:52:26.605569 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:52:31.606922 kubelet[2823]: E0509 01:52:31.606803 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:52:36.607769 kubelet[2823]: E0509 01:52:36.607668 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:52:41.608195 kubelet[2823]: E0509 01:52:41.608094 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:52:46.609156 kubelet[2823]: E0509 01:52:46.608865 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:52:50.370490 containerd[1483]: time="2025-05-09T01:52:50.370412667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"45e38b23d58a5abe0c5e20027f0317d5e2d66e0903e2a0963dfecb6ca721bf1e\" pid:5448 exited_at:{seconds:1746755570 nanos:369676727}"
May 9 01:52:51.609718 kubelet[2823]: E0509 01:52:51.609553 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:52:56.610809 kubelet[2823]: E0509 01:52:56.610474 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:53:01.611358 kubelet[2823]: E0509 01:53:01.611193 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:53:06.611890 kubelet[2823]: E0509 01:53:06.611772 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:53:11.612956 kubelet[2823]: E0509 01:53:11.612834 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:53:14.397347 kubelet[2823]: E0509 01:53:14.397080 2823 remote_runtime.go:633] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:53:14.397347 kubelet[2823]: E0509 01:53:14.397257 2823 kubelet.go:2885] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 9 01:53:16.614104 kubelet[2823]: E0509 01:53:16.614026 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:53:20.346170 containerd[1483]: time="2025-05-09T01:53:20.345997873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"fc480a8bba66ba055bdc25576fc98350ea106e78bf95fb492095298ad00e9d33\" pid:5478 exited_at:{seconds:1746755600 nanos:344480914}"
May 9 01:53:21.615166 kubelet[2823]: E0509 01:53:21.615052 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:53:26.616088 kubelet[2823]: E0509 01:53:26.615994 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:53:31.616813 kubelet[2823]: E0509 01:53:31.616761 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:53:36.618161 kubelet[2823]: E0509 01:53:36.617591 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:53:41.618848 kubelet[2823]: E0509 01:53:41.618724 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:53:46.619158 kubelet[2823]: E0509 01:53:46.619067 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:53:50.372228 containerd[1483]: time="2025-05-09T01:53:50.371893419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2717e034cbda61b2d7d13c0c83a09f119bb9115283715bfcd81f73907bf9d2cb\" id:\"81f938ed55e1c212369a285a341f2c9da44524a33191c3475ac673a02c0eaad8\" pid:5512 exited_at:{seconds:1746755630 nanos:367937404}"
May 9 01:53:51.619603 kubelet[2823]: E0509 01:53:51.619399 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:53:56.620533 kubelet[2823]: E0509 01:53:56.620406 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"
May 9 01:54:01.620817 kubelet[2823]: E0509 01:54:01.620707 2823 kubelet.go:2361] "Skipping pod synchronization" err="container runtime is down"